00:00:00.000 Started by upstream project "autotest-per-patch" build number 132839
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.067 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.069 The recommended git tool is: git
00:00:00.069 using credential 00000000-0000-0000-0000-000000000002
00:00:00.072 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.095 Fetching changes from the remote Git repository
00:00:00.107 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.145 Using shallow fetch with depth 1
00:00:00.145 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.145 > git --version # timeout=10
00:00:00.182 > git --version # 'git version 2.39.2'
00:00:00.182 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.211 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.211 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.059 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.072 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.084 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:05.084 > git config core.sparsecheckout # timeout=10
00:00:05.094 > git read-tree -mu HEAD # timeout=10
00:00:05.107 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:05.124 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:05.125 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:05.248 [Pipeline] Start of Pipeline
00:00:05.261 [Pipeline] library
00:00:05.263 Loading library shm_lib@master
00:00:05.263 Library shm_lib@master is cached. Copying from home.
00:00:05.279 [Pipeline] node
00:00:05.292 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest
00:00:05.293 [Pipeline] {
00:00:05.304 [Pipeline] catchError
00:00:05.305 [Pipeline] {
00:00:05.318 [Pipeline] wrap
00:00:05.326 [Pipeline] {
00:00:05.334 [Pipeline] stage
00:00:05.336 [Pipeline] { (Prologue)
00:00:05.353 [Pipeline] echo
00:00:05.354 Node: VM-host-WFP7
00:00:05.360 [Pipeline] cleanWs
00:00:05.370 [WS-CLEANUP] Deleting project workspace...
00:00:05.370 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.376 [WS-CLEANUP] done
00:00:05.570 [Pipeline] setCustomBuildProperty
00:00:05.718 [Pipeline] httpRequest
00:00:08.738 [Pipeline] echo
00:00:08.739 Sorcerer 10.211.164.101 is dead
00:00:08.747 [Pipeline] httpRequest
00:00:11.769 [Pipeline] echo
00:00:11.771 Sorcerer 10.211.164.101 is dead
00:00:11.780 [Pipeline] httpRequest
00:00:11.839 [Pipeline] echo
00:00:11.841 Sorcerer 10.211.164.96 is dead
00:00:11.850 [Pipeline] httpRequest
00:00:12.319 [Pipeline] echo
00:00:12.320 Sorcerer 10.211.164.20 is alive
00:00:12.332 [Pipeline] retry
00:00:12.334 [Pipeline] {
00:00:12.343 [Pipeline] httpRequest
00:00:12.346 HttpMethod: GET
00:00:12.347 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.347 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.349 Response Code: HTTP/1.1 200 OK
00:00:12.349 Success: Status code 200 is in the accepted range: 200,404
00:00:12.349 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.503 [Pipeline] }
00:00:12.514 [Pipeline] // retry
00:00:12.519 [Pipeline] sh
00:00:12.797 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.814 [Pipeline] httpRequest
00:00:13.190 [Pipeline] echo
00:00:13.191 Sorcerer 10.211.164.20 is alive
00:00:13.198 [Pipeline] retry
00:00:13.199 [Pipeline] {
00:00:13.212 [Pipeline] httpRequest
00:00:13.216 HttpMethod: GET
00:00:13.217 URL: http://10.211.164.20/packages/spdk_cec5ba284b55d19c90359936d77b707e398829f7.tar.gz
00:00:13.218 Sending request to url: http://10.211.164.20/packages/spdk_cec5ba284b55d19c90359936d77b707e398829f7.tar.gz
00:00:13.219 Response Code: HTTP/1.1 404 Not Found
00:00:13.220 Success: Status code 404 is in the accepted range: 200,404
00:00:13.220 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_cec5ba284b55d19c90359936d77b707e398829f7.tar.gz
00:00:13.225 [Pipeline] }
00:00:13.240 [Pipeline] // retry
00:00:13.246 [Pipeline] sh
00:00:13.532 + rm -f spdk_cec5ba284b55d19c90359936d77b707e398829f7.tar.gz
00:00:13.546 [Pipeline] retry
00:00:13.548 [Pipeline] {
00:00:13.569 [Pipeline] checkout
00:00:13.576 The recommended git tool is: NONE
00:00:13.586 using credential 00000000-0000-0000-0000-000000000002
00:00:13.587 Wiping out workspace first.
00:00:13.595 Cloning the remote Git repository
00:00:13.598 Honoring refspec on initial clone
00:00:13.601 Cloning repository https://review.spdk.io/gerrit/a/spdk/spdk
00:00:13.601 > git init /var/jenkins/workspace/raid-vg-autotest/spdk # timeout=10
00:00:13.627 Using reference repository: /var/ci_repos/spdk_multi
00:00:13.628 Fetching upstream changes from https://review.spdk.io/gerrit/a/spdk/spdk
00:00:13.628 > git --version # timeout=10
00:00:13.632 > git --version # 'git version 2.25.1'
00:00:13.632 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:13.636 Setting http proxy: proxy-dmz.intel.com:911
00:00:13.636 > git fetch --tags --force --progress -- https://review.spdk.io/gerrit/a/spdk/spdk refs/changes/09/24709/23 +refs/heads/master:refs/remotes/origin/master # timeout=10
00:00:35.263 Avoid second fetch
00:00:35.281 Checking out Revision cec5ba284b55d19c90359936d77b707e398829f7 (FETCH_HEAD)
00:00:35.559 Commit message: "nvme/rdma: Register UMR per IO request"
00:00:35.566 First time build. Skipping changelog.
00:00:35.241 > git config remote.origin.url https://review.spdk.io/gerrit/a/spdk/spdk # timeout=10
00:00:35.245 > git config --add remote.origin.fetch refs/changes/09/24709/23 # timeout=10
00:00:35.250 > git config --add remote.origin.fetch +refs/heads/master:refs/remotes/origin/master # timeout=10
00:00:35.265 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:35.274 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:35.283 > git config core.sparsecheckout # timeout=10
00:00:35.286 > git checkout -f cec5ba284b55d19c90359936d77b707e398829f7 # timeout=10
00:00:35.561 > git rev-list --no-walk e576aacafae0a7d34c9eefcd66f049c5a6213081 # timeout=10
00:00:35.571 > git remote # timeout=10
00:00:35.573 > git submodule init # timeout=10
00:00:35.630 > git submodule sync # timeout=10
00:00:35.686 > git config --get remote.origin.url # timeout=10
00:00:35.694 > git submodule init # timeout=10
00:00:35.747 > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10
00:00:35.752 > git config --get submodule.dpdk.url # timeout=10
00:00:35.756 > git remote # timeout=10
00:00:35.760 > git config --get remote.origin.url # timeout=10
00:00:35.764 > git config -f .gitmodules --get submodule.dpdk.path # timeout=10
00:00:35.767 > git config --get submodule.intel-ipsec-mb.url # timeout=10
00:00:35.771 > git remote # timeout=10
00:00:35.775 > git config --get remote.origin.url # timeout=10
00:00:35.778 > git config -f .gitmodules --get submodule.intel-ipsec-mb.path # timeout=10
00:00:35.782 > git config --get submodule.isa-l.url # timeout=10
00:00:35.786 > git remote # timeout=10
00:00:35.790 > git config --get remote.origin.url # timeout=10
00:00:35.794 > git config -f .gitmodules --get submodule.isa-l.path # timeout=10
00:00:35.798 > git config --get submodule.ocf.url # timeout=10
00:00:35.803 > git remote # timeout=10
00:00:35.807 > git config --get remote.origin.url # timeout=10
00:00:35.811 > git config -f .gitmodules --get submodule.ocf.path # timeout=10
00:00:35.815 > git config --get submodule.libvfio-user.url # timeout=10
00:00:35.818 > git remote # timeout=10
00:00:35.822 > git config --get remote.origin.url # timeout=10
00:00:35.826 > git config -f .gitmodules --get submodule.libvfio-user.path # timeout=10
00:00:35.829 > git config --get submodule.xnvme.url # timeout=10
00:00:35.833 > git remote # timeout=10
00:00:35.837 > git config --get remote.origin.url # timeout=10
00:00:35.841 > git config -f .gitmodules --get submodule.xnvme.path # timeout=10
00:00:35.845 > git config --get submodule.isa-l-crypto.url # timeout=10
00:00:35.848 > git remote # timeout=10
00:00:35.852 > git config --get remote.origin.url # timeout=10
00:00:35.856 > git config -f .gitmodules --get submodule.isa-l-crypto.path # timeout=10
00:00:35.862 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:35.862 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:35.862 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:35.862 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:35.862 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:35.862 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:35.862 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:35.867 Setting http proxy: proxy-dmz.intel.com:911
00:00:35.867 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi ocf # timeout=10
00:00:35.867 Setting http proxy: proxy-dmz.intel.com:911
00:00:35.867 Setting http proxy: proxy-dmz.intel.com:911
00:00:35.867 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi xnvme # timeout=10
00:00:35.867 Setting http proxy: proxy-dmz.intel.com:911
00:00:35.867 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi libvfio-user # timeout=10
00:00:35.868 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l # timeout=10
00:00:35.868 Setting http proxy: proxy-dmz.intel.com:911
00:00:35.868 Setting http proxy: proxy-dmz.intel.com:911
00:00:35.868 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi intel-ipsec-mb # timeout=10
00:00:35.868 Setting http proxy: proxy-dmz.intel.com:911
00:00:35.868 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l-crypto # timeout=10
00:00:35.868 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi dpdk # timeout=10
00:01:20.848 [Pipeline] dir
00:01:20.849 Running in /var/jenkins/workspace/raid-vg-autotest/spdk
00:01:20.850 [Pipeline] {
00:01:20.865 [Pipeline] sh
00:01:21.151 ++ nproc
00:01:21.151 + threads=80
00:01:21.151 + git repack -a -d --threads=80
00:01:29.281 + git submodule foreach git repack -a -d --threads=80
00:01:29.281 Entering 'dpdk'
00:01:31.817 Entering 'intel-ipsec-mb'
00:01:32.076 Entering 'isa-l'
00:01:32.336 Entering 'isa-l-crypto'
00:01:32.336 Entering 'libvfio-user'
00:01:32.336 Entering 'ocf'
00:01:32.595 Entering 'xnvme'
00:01:32.855 + find .git -type f -name alternates -print -delete
00:01:32.855 .git/modules/libvfio-user/objects/info/alternates
00:01:32.855 .git/modules/isa-l-crypto/objects/info/alternates
00:01:32.855 .git/modules/ocf/objects/info/alternates
00:01:32.855 .git/modules/intel-ipsec-mb/objects/info/alternates
00:01:32.855 .git/modules/isa-l/objects/info/alternates
00:01:32.855 .git/modules/xnvme/objects/info/alternates
00:01:32.855 .git/modules/dpdk/objects/info/alternates
00:01:32.855 .git/objects/info/alternates
00:01:32.866 [Pipeline] }
00:01:32.882 [Pipeline] // dir
00:01:32.888 [Pipeline] }
00:01:32.904 [Pipeline] // retry
00:01:32.912 [Pipeline] sh
00:01:33.215 + hash pigz
00:01:33.215 + tar -czf spdk_cec5ba284b55d19c90359936d77b707e398829f7.tar.gz spdk
00:01:48.192 [Pipeline] retry
00:01:48.193 [Pipeline] {
00:01:48.211 [Pipeline] httpRequest
00:01:48.231 HttpMethod: PUT
00:01:48.231 URL: http://10.211.164.20/cgi-bin/sorcerer.py?group=packages&filename=spdk_cec5ba284b55d19c90359936d77b707e398829f7.tar.gz
00:01:48.232 Sending request to url: http://10.211.164.20/cgi-bin/sorcerer.py?group=packages&filename=spdk_cec5ba284b55d19c90359936d77b707e398829f7.tar.gz
00:02:04.230 Response Code: HTTP/1.1 200 OK
00:02:04.240 Success: Status code 200 is in the accepted range: 200
00:02:04.243 [Pipeline] }
00:02:04.263 [Pipeline] // retry
00:02:04.272 [Pipeline] echo
00:02:04.274 
00:02:04.275 Locking
00:02:04.275 Waited 12s for lock
00:02:04.275 File already exists: /storage/packages/spdk_cec5ba284b55d19c90359936d77b707e398829f7.tar.gz
00:02:04.275 
00:02:04.280 [Pipeline] sh
00:02:04.566 + git -C spdk log --oneline -n5
00:02:04.566 cec5ba284 nvme/rdma: Register UMR per IO request
00:02:04.566 7219bd1a7 thread: use extended version of fd group add
00:02:04.566 1a5bdab32 event: use extended version of fd group add
00:02:04.566 92d1e663a bdev/nvme: Fix depopulating a namespace twice
00:02:04.566 52a413487 bdev: do not retry nomem I/Os during aborting them
00:02:04.588 [Pipeline] writeFile
00:02:04.645 [Pipeline] sh
00:02:04.931 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:02:04.944 [Pipeline] sh
00:02:05.326 + cat autorun-spdk.conf
00:02:05.326 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:05.326 SPDK_RUN_ASAN=1
00:02:05.326 SPDK_RUN_UBSAN=1
00:02:05.326 SPDK_TEST_RAID=1
00:02:05.326 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:05.333 RUN_NIGHTLY=0
00:02:05.335 [Pipeline] }
00:02:05.349 [Pipeline] // stage
00:02:05.370 [Pipeline] stage
00:02:05.372 [Pipeline] { (Run VM)
00:02:05.388 [Pipeline] sh
00:02:05.672 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:02:05.672 + echo 'Start stage prepare_nvme.sh'
00:02:05.672 Start stage prepare_nvme.sh
00:02:05.672 + [[ -n 6 ]]
00:02:05.672 + disk_prefix=ex6
00:02:05.672 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:02:05.672 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:02:05.672 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:02:05.672 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:05.672 ++ SPDK_RUN_ASAN=1
00:02:05.672 ++ SPDK_RUN_UBSAN=1
00:02:05.672 ++ SPDK_TEST_RAID=1
00:02:05.672 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:05.672 ++ RUN_NIGHTLY=0
00:02:05.672 + cd /var/jenkins/workspace/raid-vg-autotest
00:02:05.672 + nvme_files=()
00:02:05.672 + declare -A nvme_files
00:02:05.672 + backend_dir=/var/lib/libvirt/images/backends
00:02:05.672 + nvme_files['nvme.img']=5G
00:02:05.672 + nvme_files['nvme-cmb.img']=5G
00:02:05.672 + nvme_files['nvme-multi0.img']=4G
00:02:05.672 + nvme_files['nvme-multi1.img']=4G
00:02:05.673 + nvme_files['nvme-multi2.img']=4G
00:02:05.673 + nvme_files['nvme-openstack.img']=8G
00:02:05.673 + nvme_files['nvme-zns.img']=5G
00:02:05.673 + (( SPDK_TEST_NVME_PMR == 1 ))
00:02:05.673 + (( SPDK_TEST_FTL == 1 ))
00:02:05.673 + (( SPDK_TEST_NVME_FDP == 1 ))
00:02:05.673 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:02:05.673 + for nvme in "${!nvme_files[@]}"
00:02:05.673 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G
00:02:05.673 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:02:05.673 + for nvme in "${!nvme_files[@]}"
00:02:05.673 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G
00:02:05.673 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:02:05.673 + for nvme in "${!nvme_files[@]}"
00:02:05.673 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G
00:02:05.673 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:02:05.673 + for nvme in "${!nvme_files[@]}"
00:02:05.673 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G
00:02:05.673 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:02:05.673 + for nvme in "${!nvme_files[@]}"
00:02:05.673 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G
00:02:05.673 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:02:05.673 + for nvme in "${!nvme_files[@]}"
00:02:05.673 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G
00:02:05.673 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:02:05.673 + for nvme in "${!nvme_files[@]}"
00:02:05.673 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G
00:02:05.932 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:02:05.932 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu
00:02:05.932 + echo 'End stage prepare_nvme.sh'
00:02:05.932 End stage prepare_nvme.sh
00:02:05.945 [Pipeline] sh
00:02:06.230 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:02:06.230 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -H -a -v -f fedora39
00:02:06.230 
00:02:06.230 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:02:06.230 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:02:06.230 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:02:06.230 HELP=0
00:02:06.230 DRY_RUN=0
00:02:06.230 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,
00:02:06.230 NVME_DISKS_TYPE=nvme,nvme,
00:02:06.230 NVME_AUTO_CREATE=0
00:02:06.230 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,
00:02:06.230 NVME_CMB=,,
00:02:06.230 NVME_PMR=,,
00:02:06.230 NVME_ZNS=,,
00:02:06.230 NVME_MS=,,
00:02:06.230 NVME_FDP=,,
00:02:06.230 SPDK_VAGRANT_DISTRO=fedora39
00:02:06.230 SPDK_VAGRANT_VMCPU=10
00:02:06.230 SPDK_VAGRANT_VMRAM=12288
00:02:06.230 SPDK_VAGRANT_PROVIDER=libvirt
00:02:06.230 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:02:06.230 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:02:06.230 SPDK_OPENSTACK_NETWORK=0
00:02:06.230 VAGRANT_PACKAGE_BOX=0
00:02:06.230 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:02:06.230 FORCE_DISTRO=true
00:02:06.230 VAGRANT_BOX_VERSION=
00:02:06.230 EXTRA_VAGRANTFILES=
00:02:06.230 NIC_MODEL=virtio
00:02:06.230 
00:02:06.230 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:02:06.230 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:02:08.769 Bringing machine 'default' up with 'libvirt' provider...
00:02:09.704 ==> default: Creating image (snapshot of base box volume).
00:02:09.704 ==> default: Creating domain with the following settings...
00:02:09.704 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733866150_93fc84ff46ea3298ea6a
00:02:09.704 ==> default: -- Domain type: kvm
00:02:09.704 ==> default: -- Cpus: 10
00:02:09.704 ==> default: -- Feature: acpi
00:02:09.704 ==> default: -- Feature: apic
00:02:09.704 ==> default: -- Feature: pae
00:02:09.704 ==> default: -- Memory: 12288M
00:02:09.704 ==> default: -- Memory Backing: hugepages: 
00:02:09.704 ==> default: -- Management MAC: 
00:02:09.704 ==> default: -- Loader: 
00:02:09.704 ==> default: -- Nvram: 
00:02:09.704 ==> default: -- Base box: spdk/fedora39
00:02:09.704 ==> default: -- Storage pool: default
00:02:09.704 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733866150_93fc84ff46ea3298ea6a.img (20G)
00:02:09.704 ==> default: -- Volume Cache: default
00:02:09.704 ==> default: -- Kernel: 
00:02:09.704 ==> default: -- Initrd: 
00:02:09.704 ==> default: -- Graphics Type: vnc
00:02:09.704 ==> default: -- Graphics Port: -1
00:02:09.704 ==> default: -- Graphics IP: 127.0.0.1
00:02:09.704 ==> default: -- Graphics Password: Not defined
00:02:09.704 ==> default: -- Video Type: cirrus
00:02:09.704 ==> default: -- Video VRAM: 9216
00:02:09.704 ==> default: -- Sound Type: 
00:02:09.704 ==> default: -- Keymap: en-us
00:02:09.704 ==> default: -- TPM Path: 
00:02:09.704 ==> default: -- INPUT: type=mouse, bus=ps2
00:02:09.704 ==> default: -- Command line args: 
00:02:09.704 ==> default: -> value=-device, 
00:02:09.704 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 
00:02:09.704 ==> default: -> value=-drive, 
00:02:09.704 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0, 
00:02:09.704 ==> default: -> value=-device, 
00:02:09.704 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:02:09.704 ==> default: -> value=-device, 
00:02:09.704 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 
00:02:09.704 ==> default: -> value=-drive, 
00:02:09.704 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0, 
00:02:09.704 ==> default: -> value=-device, 
00:02:09.704 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:02:09.704 ==> default: -> value=-drive, 
00:02:09.704 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1, 
00:02:09.704 ==> default: -> value=-device, 
00:02:09.704 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:02:09.704 ==> default: -> value=-drive, 
00:02:09.704 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2, 
00:02:09.704 ==> default: -> value=-device, 
00:02:09.704 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:02:09.704 ==> default: Creating shared folders metadata...
00:02:09.704 ==> default: Starting domain.
00:02:11.080 ==> default: Waiting for domain to get an IP address...
00:02:29.167 ==> default: Waiting for SSH to become available...
00:02:29.167 ==> default: Configuring and enabling network interfaces...
00:02:32.458 default: SSH address: 192.168.121.65:22
00:02:32.458 default: SSH username: vagrant
00:02:32.458 default: SSH auth method: private key
00:02:34.996 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:45.002 ==> default: Mounting SSHFS shared folder...
00:02:45.940 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:45.940 ==> default: Checking Mount..
00:02:47.318 ==> default: Folder Successfully Mounted!
00:02:47.318 ==> default: Running provisioner: file...
00:02:48.698 default: ~/.gitconfig => .gitconfig
00:02:48.957 
00:02:48.957 SUCCESS!
00:02:48.957 
00:02:48.957 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:48.957 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:48.957 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:48.957 
00:02:48.965 [Pipeline] }
00:02:48.981 [Pipeline] // stage
00:02:48.990 [Pipeline] dir
00:02:48.990 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:02:48.992 [Pipeline] {
00:02:49.005 [Pipeline] catchError
00:02:49.007 [Pipeline] {
00:02:49.020 [Pipeline] sh
00:02:49.303 + + sed -ne /^Host/,$p
00:02:49.303 vagrant ssh-config --host vagrant
00:02:49.303 + tee ssh_conf
00:02:52.590 Host vagrant
00:02:52.590 HostName 192.168.121.65
00:02:52.590 User vagrant
00:02:52.590 Port 22
00:02:52.590 UserKnownHostsFile /dev/null
00:02:52.590 StrictHostKeyChecking no
00:02:52.590 PasswordAuthentication no
00:02:52.590 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:52.590 IdentitiesOnly yes
00:02:52.590 LogLevel FATAL
00:02:52.590 ForwardAgent yes
00:02:52.590 ForwardX11 yes
00:02:52.590 
00:02:52.605 [Pipeline] withEnv
00:02:52.607 [Pipeline] {
00:02:52.622 [Pipeline] sh
00:02:52.903 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:52.903 source /etc/os-release
00:02:52.903 [[ -e /image.version ]] && img=$(< /image.version)
00:02:52.903 # Minimal, systemd-like check.
00:02:52.903 if [[ -e /.dockerenv ]]; then
00:02:52.903 # Clear garbage from the node's name:
00:02:52.903 # agt-er_autotest_547-896 -> autotest_547-896
00:02:52.903 # $HOSTNAME is the actual container id
00:02:52.903 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:52.903 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:52.903 # We can assume this is a mount from a host where container is running,
00:02:52.903 # so fetch its hostname to easily identify the target swarm worker.
00:02:52.903 container="$(< /etc/hostname) ($agent)"
00:02:52.903 else
00:02:52.903 # Fallback
00:02:52.903 container=$agent
00:02:52.903 fi
00:02:52.903 fi
00:02:52.903 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:52.903 
00:02:53.174 [Pipeline] }
00:02:53.191 [Pipeline] // withEnv
00:02:53.200 [Pipeline] setCustomBuildProperty
00:02:53.217 [Pipeline] stage
00:02:53.219 [Pipeline] { (Tests)
00:02:53.236 [Pipeline] sh
00:02:53.519 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:53.793 [Pipeline] sh
00:02:54.076 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:54.347 [Pipeline] timeout
00:02:54.347 Timeout set to expire in 1 hr 30 min
00:02:54.349 [Pipeline] {
00:02:54.360 [Pipeline] sh
00:02:54.637 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:55.203 HEAD is now at cec5ba284 nvme/rdma: Register UMR per IO request
00:02:55.216 [Pipeline] sh
00:02:55.498 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:55.816 [Pipeline] sh
00:02:56.099 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:56.373 [Pipeline] sh
00:02:56.652 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:02:56.911 ++ readlink -f spdk_repo
00:02:56.911 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:56.911 + [[ -n /home/vagrant/spdk_repo ]]
00:02:56.911 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:56.911 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:56.911 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:56.911 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:56.911 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:56.911 + [[ raid-vg-autotest == pkgdep-* ]]
00:02:56.911 + cd /home/vagrant/spdk_repo
00:02:56.911 + source /etc/os-release
00:02:56.911 ++ NAME='Fedora Linux'
00:02:56.911 ++ VERSION='39 (Cloud Edition)'
00:02:56.911 ++ ID=fedora
00:02:56.911 ++ VERSION_ID=39
00:02:56.911 ++ VERSION_CODENAME=
00:02:56.911 ++ PLATFORM_ID=platform:f39
00:02:56.911 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:56.911 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:56.911 ++ LOGO=fedora-logo-icon
00:02:56.911 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:56.911 ++ HOME_URL=https://fedoraproject.org/
00:02:56.911 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:56.911 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:56.911 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:56.911 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:56.911 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:56.911 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:56.911 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:56.911 ++ SUPPORT_END=2024-11-12
00:02:56.911 ++ VARIANT='Cloud Edition'
00:02:56.911 ++ VARIANT_ID=cloud
00:02:56.911 + uname -a
00:02:56.911 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:56.911 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:57.478 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:57.478 Hugepages
00:02:57.478 node hugesize free / total
00:02:57.478 node0 1048576kB 0 / 0
00:02:57.478 node0 2048kB 0 / 0
00:02:57.478 
00:02:57.478 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:57.478 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:57.478 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:57.478 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:02:57.478 + rm -f /tmp/spdk-ld-path
00:02:57.478 + source autorun-spdk.conf
00:02:57.478 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:57.478 ++ SPDK_RUN_ASAN=1
00:02:57.478 ++ SPDK_RUN_UBSAN=1
00:02:57.478 ++ SPDK_TEST_RAID=1
00:02:57.478 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:57.478 ++ RUN_NIGHTLY=0
00:02:57.478 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:57.478 + [[ -n '' ]]
00:02:57.478 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:57.478 + for M in /var/spdk/build-*-manifest.txt
00:02:57.478 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:57.478 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:57.478 + for M in /var/spdk/build-*-manifest.txt
00:02:57.478 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:57.478 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:57.478 + for M in /var/spdk/build-*-manifest.txt
00:02:57.478 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:57.478 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:57.478 ++ uname
00:02:57.478 + [[ Linux == \L\i\n\u\x ]]
00:02:57.478 + sudo dmesg -T
00:02:57.737 + sudo dmesg --clear
00:02:57.737 + dmesg_pid=5421
00:02:57.737 + sudo dmesg -Tw
00:02:57.737 + [[ Fedora Linux == FreeBSD ]]
00:02:57.737 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:57.737 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:57.737 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:57.737 + [[ -x /usr/src/fio-static/fio ]]
00:02:57.737 + export FIO_BIN=/usr/src/fio-static/fio
00:02:57.737 + FIO_BIN=/usr/src/fio-static/fio
00:02:57.737 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:57.737 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:57.737 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:57.737 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:57.737 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:57.737 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:57.737 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:57.737 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:57.737 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:57.737 21:29:58 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:02:57.737 21:29:58 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:57.737 21:29:58 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:57.737 21:29:58 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:02:57.737 21:29:58 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:02:57.737 21:29:58 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:02:57.737 21:29:58 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:57.737 21:29:58 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:02:57.737 21:29:58 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:57.737 21:29:58 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:57.737 21:29:58 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:02:57.737 21:29:58 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:57.737 21:29:58 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:57.737 21:29:58 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:57.737 21:29:58 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:57.737 21:29:58 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:57.737 21:29:58 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:57.737 21:29:58 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:57.737 21:29:58 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:57.737 21:29:58 -- paths/export.sh@5 -- $ export PATH
00:02:57.737 21:29:58 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:57.737 21:29:58 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:57.737 21:29:58 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:57.737 21:29:58 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733866198.XXXXXX
00:02:57.737 21:29:58 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733866198.m6hLJe
00:02:57.737 21:29:58 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:57.737 21:29:58 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:02:57.737 21:29:58 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:57.737 21:29:58 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:57.737 21:29:58 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:57.997 21:29:58 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:57.997 21:29:58 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:57.997 21:29:58 -- common/autotest_common.sh@10 -- $ set +x
00:02:57.997 21:29:58 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:02:57.997 21:29:58 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:57.997 21:29:58 -- pm/common@17 -- $ local monitor 00:02:57.997 21:29:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:57.997 21:29:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:57.997 21:29:58 -- pm/common@21 -- $ date +%s 00:02:57.997 21:29:58 -- pm/common@25 -- $ sleep 1 00:02:57.997 21:29:58 -- pm/common@21 -- $ date +%s 00:02:57.997 21:29:58 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733866198 00:02:57.997 21:29:58 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733866198 00:02:57.997 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733866198_collect-cpu-load.pm.log 00:02:57.997 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733866198_collect-vmstat.pm.log 00:02:59.049 21:29:59 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:59.049 21:29:59 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:59.049 21:29:59 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:59.049 21:29:59 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:59.049 21:29:59 -- spdk/autobuild.sh@16 -- $ date -u 00:02:59.049 Tue Dec 10 09:29:59 PM UTC 2024 00:02:59.049 21:29:59 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:59.049 v25.01-pre-328-gcec5ba284 00:02:59.049 21:29:59 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:59.049 21:29:59 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:59.049 21:29:59 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:59.049 21:29:59 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:59.049 21:29:59 -- common/autotest_common.sh@10 -- $ set +x 
00:02:59.049 ************************************ 00:02:59.049 START TEST asan 00:02:59.050 ************************************ 00:02:59.050 using asan 00:02:59.050 21:29:59 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:59.050 00:02:59.050 real 0m0.001s 00:02:59.050 user 0m0.001s 00:02:59.050 sys 0m0.000s 00:02:59.050 21:29:59 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:59.050 21:29:59 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:59.050 ************************************ 00:02:59.050 END TEST asan 00:02:59.050 ************************************ 00:02:59.050 21:29:59 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:59.050 21:29:59 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:59.050 21:29:59 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:59.050 21:29:59 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:59.050 21:29:59 -- common/autotest_common.sh@10 -- $ set +x 00:02:59.050 ************************************ 00:02:59.050 START TEST ubsan 00:02:59.050 ************************************ 00:02:59.050 using ubsan 00:02:59.050 21:29:59 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:59.050 00:02:59.050 real 0m0.000s 00:02:59.050 user 0m0.000s 00:02:59.050 sys 0m0.000s 00:02:59.050 21:29:59 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:59.050 21:29:59 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:59.050 ************************************ 00:02:59.050 END TEST ubsan 00:02:59.050 ************************************ 00:02:59.050 21:29:59 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:59.050 21:29:59 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:59.050 21:29:59 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:59.050 21:29:59 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:59.050 21:29:59 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:59.050 21:29:59 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 
]] 00:02:59.050 21:29:59 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:59.050 21:29:59 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:59.050 21:29:59 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:02:59.334 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:59.334 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:59.594 Using 'verbs' RDMA provider 00:03:15.852 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:30.746 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:30.746 Creating mk/config.mk...done. 00:03:30.746 Creating mk/cc.flags.mk...done. 00:03:30.746 Type 'make' to build. 00:03:30.746 21:30:30 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:30.746 21:30:30 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:30.746 21:30:30 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:30.746 21:30:30 -- common/autotest_common.sh@10 -- $ set +x 00:03:30.746 ************************************ 00:03:30.746 START TEST make 00:03:30.746 ************************************ 00:03:30.746 21:30:30 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:30.746 make[1]: Nothing to be done for 'all'. 
00:03:42.953 The Meson build system 00:03:42.953 Version: 1.5.0 00:03:42.953 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:42.953 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:42.953 Build type: native build 00:03:42.953 Program cat found: YES (/usr/bin/cat) 00:03:42.953 Project name: DPDK 00:03:42.953 Project version: 24.03.0 00:03:42.953 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:42.953 C linker for the host machine: cc ld.bfd 2.40-14 00:03:42.953 Host machine cpu family: x86_64 00:03:42.953 Host machine cpu: x86_64 00:03:42.953 Message: ## Building in Developer Mode ## 00:03:42.953 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:42.953 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:42.953 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:42.953 Program python3 found: YES (/usr/bin/python3) 00:03:42.953 Program cat found: YES (/usr/bin/cat) 00:03:42.953 Compiler for C supports arguments -march=native: YES 00:03:42.953 Checking for size of "void *" : 8 00:03:42.953 Checking for size of "void *" : 8 (cached) 00:03:42.953 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:42.953 Library m found: YES 00:03:42.953 Library numa found: YES 00:03:42.953 Has header "numaif.h" : YES 00:03:42.953 Library fdt found: NO 00:03:42.953 Library execinfo found: NO 00:03:42.953 Has header "execinfo.h" : YES 00:03:42.953 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:42.953 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:42.953 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:42.953 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:42.953 Run-time dependency openssl found: YES 3.1.1 00:03:42.953 Run-time dependency libpcap found: YES 1.10.4 00:03:42.953 Has header "pcap.h" with dependency 
libpcap: YES 00:03:42.953 Compiler for C supports arguments -Wcast-qual: YES 00:03:42.953 Compiler for C supports arguments -Wdeprecated: YES 00:03:42.953 Compiler for C supports arguments -Wformat: YES 00:03:42.953 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:42.953 Compiler for C supports arguments -Wformat-security: NO 00:03:42.953 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:42.953 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:42.953 Compiler for C supports arguments -Wnested-externs: YES 00:03:42.953 Compiler for C supports arguments -Wold-style-definition: YES 00:03:42.953 Compiler for C supports arguments -Wpointer-arith: YES 00:03:42.953 Compiler for C supports arguments -Wsign-compare: YES 00:03:42.953 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:42.953 Compiler for C supports arguments -Wundef: YES 00:03:42.953 Compiler for C supports arguments -Wwrite-strings: YES 00:03:42.953 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:42.953 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:42.953 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:42.953 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:42.954 Program objdump found: YES (/usr/bin/objdump) 00:03:42.954 Compiler for C supports arguments -mavx512f: YES 00:03:42.954 Checking if "AVX512 checking" compiles: YES 00:03:42.954 Fetching value of define "__SSE4_2__" : 1 00:03:42.954 Fetching value of define "__AES__" : 1 00:03:42.954 Fetching value of define "__AVX__" : 1 00:03:42.954 Fetching value of define "__AVX2__" : 1 00:03:42.954 Fetching value of define "__AVX512BW__" : 1 00:03:42.954 Fetching value of define "__AVX512CD__" : 1 00:03:42.954 Fetching value of define "__AVX512DQ__" : 1 00:03:42.954 Fetching value of define "__AVX512F__" : 1 00:03:42.954 Fetching value of define "__AVX512VL__" : 1 00:03:42.954 Fetching value of define 
"__PCLMUL__" : 1 00:03:42.954 Fetching value of define "__RDRND__" : 1 00:03:42.954 Fetching value of define "__RDSEED__" : 1 00:03:42.954 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:42.954 Fetching value of define "__znver1__" : (undefined) 00:03:42.954 Fetching value of define "__znver2__" : (undefined) 00:03:42.954 Fetching value of define "__znver3__" : (undefined) 00:03:42.954 Fetching value of define "__znver4__" : (undefined) 00:03:42.954 Library asan found: YES 00:03:42.954 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:42.954 Message: lib/log: Defining dependency "log" 00:03:42.954 Message: lib/kvargs: Defining dependency "kvargs" 00:03:42.954 Message: lib/telemetry: Defining dependency "telemetry" 00:03:42.954 Library rt found: YES 00:03:42.954 Checking for function "getentropy" : NO 00:03:42.954 Message: lib/eal: Defining dependency "eal" 00:03:42.954 Message: lib/ring: Defining dependency "ring" 00:03:42.954 Message: lib/rcu: Defining dependency "rcu" 00:03:42.954 Message: lib/mempool: Defining dependency "mempool" 00:03:42.954 Message: lib/mbuf: Defining dependency "mbuf" 00:03:42.954 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:42.954 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:42.954 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:42.954 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:42.954 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:42.954 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:03:42.954 Compiler for C supports arguments -mpclmul: YES 00:03:42.954 Compiler for C supports arguments -maes: YES 00:03:42.954 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:42.954 Compiler for C supports arguments -mavx512bw: YES 00:03:42.954 Compiler for C supports arguments -mavx512dq: YES 00:03:42.954 Compiler for C supports arguments -mavx512vl: YES 00:03:42.954 Compiler for C supports arguments -mvpclmulqdq: YES 
00:03:42.954 Compiler for C supports arguments -mavx2: YES 00:03:42.954 Compiler for C supports arguments -mavx: YES 00:03:42.954 Message: lib/net: Defining dependency "net" 00:03:42.954 Message: lib/meter: Defining dependency "meter" 00:03:42.954 Message: lib/ethdev: Defining dependency "ethdev" 00:03:42.954 Message: lib/pci: Defining dependency "pci" 00:03:42.954 Message: lib/cmdline: Defining dependency "cmdline" 00:03:42.954 Message: lib/hash: Defining dependency "hash" 00:03:42.954 Message: lib/timer: Defining dependency "timer" 00:03:42.954 Message: lib/compressdev: Defining dependency "compressdev" 00:03:42.954 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:42.954 Message: lib/dmadev: Defining dependency "dmadev" 00:03:42.954 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:42.954 Message: lib/power: Defining dependency "power" 00:03:42.954 Message: lib/reorder: Defining dependency "reorder" 00:03:42.954 Message: lib/security: Defining dependency "security" 00:03:42.954 Has header "linux/userfaultfd.h" : YES 00:03:42.954 Has header "linux/vduse.h" : YES 00:03:42.954 Message: lib/vhost: Defining dependency "vhost" 00:03:42.954 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:42.954 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:42.954 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:42.954 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:42.954 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:42.954 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:42.954 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:42.954 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:42.954 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:42.954 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:42.954 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:42.954 Configuring doxy-api-html.conf using configuration 00:03:42.954 Configuring doxy-api-man.conf using configuration 00:03:42.954 Program mandb found: YES (/usr/bin/mandb) 00:03:42.954 Program sphinx-build found: NO 00:03:42.954 Configuring rte_build_config.h using configuration 00:03:42.954 Message: 00:03:42.954 ================= 00:03:42.954 Applications Enabled 00:03:42.954 ================= 00:03:42.954 00:03:42.954 apps: 00:03:42.954 00:03:42.954 00:03:42.954 Message: 00:03:42.954 ================= 00:03:42.954 Libraries Enabled 00:03:42.954 ================= 00:03:42.954 00:03:42.954 libs: 00:03:42.954 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:42.954 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:42.954 cryptodev, dmadev, power, reorder, security, vhost, 00:03:42.954 00:03:42.954 Message: 00:03:42.954 =============== 00:03:42.954 Drivers Enabled 00:03:42.954 =============== 00:03:42.954 00:03:42.954 common: 00:03:42.954 00:03:42.954 bus: 00:03:42.954 pci, vdev, 00:03:42.954 mempool: 00:03:42.954 ring, 00:03:42.954 dma: 00:03:42.954 00:03:42.954 net: 00:03:42.954 00:03:42.954 crypto: 00:03:42.954 00:03:42.954 compress: 00:03:42.954 00:03:42.954 vdpa: 00:03:42.954 00:03:42.954 00:03:42.954 Message: 00:03:42.954 ================= 00:03:42.954 Content Skipped 00:03:42.954 ================= 00:03:42.954 00:03:42.954 apps: 00:03:42.954 dumpcap: explicitly disabled via build config 00:03:42.954 graph: explicitly disabled via build config 00:03:42.954 pdump: explicitly disabled via build config 00:03:42.954 proc-info: explicitly disabled via build config 00:03:42.954 test-acl: explicitly disabled via build config 00:03:42.954 test-bbdev: explicitly disabled via build config 00:03:42.954 test-cmdline: explicitly disabled via build config 00:03:42.954 test-compress-perf: explicitly disabled via build config 00:03:42.954 test-crypto-perf: explicitly disabled via build 
config 00:03:42.954 test-dma-perf: explicitly disabled via build config 00:03:42.954 test-eventdev: explicitly disabled via build config 00:03:42.954 test-fib: explicitly disabled via build config 00:03:42.954 test-flow-perf: explicitly disabled via build config 00:03:42.954 test-gpudev: explicitly disabled via build config 00:03:42.954 test-mldev: explicitly disabled via build config 00:03:42.954 test-pipeline: explicitly disabled via build config 00:03:42.954 test-pmd: explicitly disabled via build config 00:03:42.954 test-regex: explicitly disabled via build config 00:03:42.954 test-sad: explicitly disabled via build config 00:03:42.954 test-security-perf: explicitly disabled via build config 00:03:42.954 00:03:42.954 libs: 00:03:42.954 argparse: explicitly disabled via build config 00:03:42.954 metrics: explicitly disabled via build config 00:03:42.954 acl: explicitly disabled via build config 00:03:42.954 bbdev: explicitly disabled via build config 00:03:42.954 bitratestats: explicitly disabled via build config 00:03:42.954 bpf: explicitly disabled via build config 00:03:42.954 cfgfile: explicitly disabled via build config 00:03:42.954 distributor: explicitly disabled via build config 00:03:42.954 efd: explicitly disabled via build config 00:03:42.954 eventdev: explicitly disabled via build config 00:03:42.954 dispatcher: explicitly disabled via build config 00:03:42.954 gpudev: explicitly disabled via build config 00:03:42.954 gro: explicitly disabled via build config 00:03:42.954 gso: explicitly disabled via build config 00:03:42.954 ip_frag: explicitly disabled via build config 00:03:42.954 jobstats: explicitly disabled via build config 00:03:42.954 latencystats: explicitly disabled via build config 00:03:42.954 lpm: explicitly disabled via build config 00:03:42.954 member: explicitly disabled via build config 00:03:42.954 pcapng: explicitly disabled via build config 00:03:42.954 rawdev: explicitly disabled via build config 00:03:42.954 regexdev: explicitly 
disabled via build config 00:03:42.954 mldev: explicitly disabled via build config 00:03:42.954 rib: explicitly disabled via build config 00:03:42.954 sched: explicitly disabled via build config 00:03:42.954 stack: explicitly disabled via build config 00:03:42.954 ipsec: explicitly disabled via build config 00:03:42.954 pdcp: explicitly disabled via build config 00:03:42.954 fib: explicitly disabled via build config 00:03:42.954 port: explicitly disabled via build config 00:03:42.954 pdump: explicitly disabled via build config 00:03:42.954 table: explicitly disabled via build config 00:03:42.954 pipeline: explicitly disabled via build config 00:03:42.954 graph: explicitly disabled via build config 00:03:42.954 node: explicitly disabled via build config 00:03:42.954 00:03:42.954 drivers: 00:03:42.954 common/cpt: not in enabled drivers build config 00:03:42.954 common/dpaax: not in enabled drivers build config 00:03:42.954 common/iavf: not in enabled drivers build config 00:03:42.954 common/idpf: not in enabled drivers build config 00:03:42.954 common/ionic: not in enabled drivers build config 00:03:42.954 common/mvep: not in enabled drivers build config 00:03:42.954 common/octeontx: not in enabled drivers build config 00:03:42.954 bus/auxiliary: not in enabled drivers build config 00:03:42.954 bus/cdx: not in enabled drivers build config 00:03:42.954 bus/dpaa: not in enabled drivers build config 00:03:42.954 bus/fslmc: not in enabled drivers build config 00:03:42.954 bus/ifpga: not in enabled drivers build config 00:03:42.954 bus/platform: not in enabled drivers build config 00:03:42.954 bus/uacce: not in enabled drivers build config 00:03:42.955 bus/vmbus: not in enabled drivers build config 00:03:42.955 common/cnxk: not in enabled drivers build config 00:03:42.955 common/mlx5: not in enabled drivers build config 00:03:42.955 common/nfp: not in enabled drivers build config 00:03:42.955 common/nitrox: not in enabled drivers build config 00:03:42.955 common/qat: not 
in enabled drivers build config 00:03:42.955 common/sfc_efx: not in enabled drivers build config 00:03:42.955 mempool/bucket: not in enabled drivers build config 00:03:42.955 mempool/cnxk: not in enabled drivers build config 00:03:42.955 mempool/dpaa: not in enabled drivers build config 00:03:42.955 mempool/dpaa2: not in enabled drivers build config 00:03:42.955 mempool/octeontx: not in enabled drivers build config 00:03:42.955 mempool/stack: not in enabled drivers build config 00:03:42.955 dma/cnxk: not in enabled drivers build config 00:03:42.955 dma/dpaa: not in enabled drivers build config 00:03:42.955 dma/dpaa2: not in enabled drivers build config 00:03:42.955 dma/hisilicon: not in enabled drivers build config 00:03:42.955 dma/idxd: not in enabled drivers build config 00:03:42.955 dma/ioat: not in enabled drivers build config 00:03:42.955 dma/skeleton: not in enabled drivers build config 00:03:42.955 net/af_packet: not in enabled drivers build config 00:03:42.955 net/af_xdp: not in enabled drivers build config 00:03:42.955 net/ark: not in enabled drivers build config 00:03:42.955 net/atlantic: not in enabled drivers build config 00:03:42.955 net/avp: not in enabled drivers build config 00:03:42.955 net/axgbe: not in enabled drivers build config 00:03:42.955 net/bnx2x: not in enabled drivers build config 00:03:42.955 net/bnxt: not in enabled drivers build config 00:03:42.955 net/bonding: not in enabled drivers build config 00:03:42.955 net/cnxk: not in enabled drivers build config 00:03:42.955 net/cpfl: not in enabled drivers build config 00:03:42.955 net/cxgbe: not in enabled drivers build config 00:03:42.955 net/dpaa: not in enabled drivers build config 00:03:42.955 net/dpaa2: not in enabled drivers build config 00:03:42.955 net/e1000: not in enabled drivers build config 00:03:42.955 net/ena: not in enabled drivers build config 00:03:42.955 net/enetc: not in enabled drivers build config 00:03:42.955 net/enetfec: not in enabled drivers build config 
00:03:42.955 net/enic: not in enabled drivers build config 00:03:42.955 net/failsafe: not in enabled drivers build config 00:03:42.955 net/fm10k: not in enabled drivers build config 00:03:42.955 net/gve: not in enabled drivers build config 00:03:42.955 net/hinic: not in enabled drivers build config 00:03:42.955 net/hns3: not in enabled drivers build config 00:03:42.955 net/i40e: not in enabled drivers build config 00:03:42.955 net/iavf: not in enabled drivers build config 00:03:42.955 net/ice: not in enabled drivers build config 00:03:42.955 net/idpf: not in enabled drivers build config 00:03:42.955 net/igc: not in enabled drivers build config 00:03:42.955 net/ionic: not in enabled drivers build config 00:03:42.955 net/ipn3ke: not in enabled drivers build config 00:03:42.955 net/ixgbe: not in enabled drivers build config 00:03:42.955 net/mana: not in enabled drivers build config 00:03:42.955 net/memif: not in enabled drivers build config 00:03:42.955 net/mlx4: not in enabled drivers build config 00:03:42.955 net/mlx5: not in enabled drivers build config 00:03:42.955 net/mvneta: not in enabled drivers build config 00:03:42.955 net/mvpp2: not in enabled drivers build config 00:03:42.955 net/netvsc: not in enabled drivers build config 00:03:42.955 net/nfb: not in enabled drivers build config 00:03:42.955 net/nfp: not in enabled drivers build config 00:03:42.955 net/ngbe: not in enabled drivers build config 00:03:42.955 net/null: not in enabled drivers build config 00:03:42.955 net/octeontx: not in enabled drivers build config 00:03:42.955 net/octeon_ep: not in enabled drivers build config 00:03:42.955 net/pcap: not in enabled drivers build config 00:03:42.955 net/pfe: not in enabled drivers build config 00:03:42.955 net/qede: not in enabled drivers build config 00:03:42.955 net/ring: not in enabled drivers build config 00:03:42.955 net/sfc: not in enabled drivers build config 00:03:42.955 net/softnic: not in enabled drivers build config 00:03:42.955 net/tap: not in 
enabled drivers build config 00:03:42.955 net/thunderx: not in enabled drivers build config 00:03:42.955 net/txgbe: not in enabled drivers build config 00:03:42.955 net/vdev_netvsc: not in enabled drivers build config 00:03:42.955 net/vhost: not in enabled drivers build config 00:03:42.955 net/virtio: not in enabled drivers build config 00:03:42.955 net/vmxnet3: not in enabled drivers build config 00:03:42.955 raw/*: missing internal dependency, "rawdev" 00:03:42.955 crypto/armv8: not in enabled drivers build config 00:03:42.955 crypto/bcmfs: not in enabled drivers build config 00:03:42.955 crypto/caam_jr: not in enabled drivers build config 00:03:42.955 crypto/ccp: not in enabled drivers build config 00:03:42.955 crypto/cnxk: not in enabled drivers build config 00:03:42.955 crypto/dpaa_sec: not in enabled drivers build config 00:03:42.955 crypto/dpaa2_sec: not in enabled drivers build config 00:03:42.955 crypto/ipsec_mb: not in enabled drivers build config 00:03:42.955 crypto/mlx5: not in enabled drivers build config 00:03:42.955 crypto/mvsam: not in enabled drivers build config 00:03:42.955 crypto/nitrox: not in enabled drivers build config 00:03:42.955 crypto/null: not in enabled drivers build config 00:03:42.955 crypto/octeontx: not in enabled drivers build config 00:03:42.955 crypto/openssl: not in enabled drivers build config 00:03:42.955 crypto/scheduler: not in enabled drivers build config 00:03:42.955 crypto/uadk: not in enabled drivers build config 00:03:42.955 crypto/virtio: not in enabled drivers build config 00:03:42.955 compress/isal: not in enabled drivers build config 00:03:42.955 compress/mlx5: not in enabled drivers build config 00:03:42.955 compress/nitrox: not in enabled drivers build config 00:03:42.955 compress/octeontx: not in enabled drivers build config 00:03:42.955 compress/zlib: not in enabled drivers build config 00:03:42.955 regex/*: missing internal dependency, "regexdev" 00:03:42.955 ml/*: missing internal dependency, "mldev" 
00:03:42.955 vdpa/ifc: not in enabled drivers build config 00:03:42.955 vdpa/mlx5: not in enabled drivers build config 00:03:42.955 vdpa/nfp: not in enabled drivers build config 00:03:42.955 vdpa/sfc: not in enabled drivers build config 00:03:42.955 event/*: missing internal dependency, "eventdev" 00:03:42.955 baseband/*: missing internal dependency, "bbdev" 00:03:42.955 gpu/*: missing internal dependency, "gpudev" 00:03:42.955 00:03:42.955 00:03:42.955 Build targets in project: 85 00:03:42.955 00:03:42.955 DPDK 24.03.0 00:03:42.955 00:03:42.955 User defined options 00:03:42.955 buildtype : debug 00:03:42.955 default_library : shared 00:03:42.955 libdir : lib 00:03:42.955 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:42.955 b_sanitize : address 00:03:42.955 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:42.955 c_link_args : 00:03:42.955 cpu_instruction_set: native 00:03:42.955 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:42.955 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:42.955 enable_docs : false 00:03:42.955 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:42.955 enable_kmods : false 00:03:42.955 max_lcores : 128 00:03:42.955 tests : false 00:03:42.955 00:03:42.955 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:43.521 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:43.521 [1/268] Compiling C object 
lib/librte_log.a.p/log_log_linux.c.o 00:03:43.521 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:43.521 [3/268] Linking static target lib/librte_kvargs.a 00:03:43.521 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:43.779 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:43.779 [6/268] Linking static target lib/librte_log.a 00:03:44.037 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.037 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:44.037 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:44.037 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:44.295 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:44.295 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:44.295 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:44.295 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:44.295 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:44.295 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:44.295 [17/268] Linking static target lib/librte_telemetry.a 00:03:44.295 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:44.860 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.860 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:44.860 [21/268] Linking target lib/librte_log.so.24.1 00:03:44.860 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:44.860 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:44.860 [24/268] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:44.860 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:45.118 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:45.118 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:45.118 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:45.118 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:45.118 [30/268] Linking target lib/librte_kvargs.so.24.1 00:03:45.118 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:45.377 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:45.377 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.377 [34/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:45.377 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:45.377 [36/268] Linking target lib/librte_telemetry.so.24.1 00:03:45.635 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:45.635 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:45.635 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:45.635 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:45.635 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:45.635 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:45.896 [43/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:45.896 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:45.896 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 
00:03:46.170 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:46.170 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:46.170 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:46.170 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:46.428 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:46.428 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:46.428 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:46.686 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:46.686 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:46.945 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:46.945 [56/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:46.945 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:46.945 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:46.945 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:46.945 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:47.204 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:47.204 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:47.204 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:47.463 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:47.463 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:47.463 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:47.721 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:47.721 [68/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:47.721 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:47.978 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:47.978 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:47.978 [72/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:47.978 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:48.236 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:48.236 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:48.236 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:48.236 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:48.236 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:48.236 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:48.494 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:48.494 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:48.494 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:48.494 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:48.494 [84/268] Linking static target lib/librte_ring.a 00:03:48.494 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:48.752 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:48.752 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:48.752 [88/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:49.010 [89/268] Linking static target lib/librte_eal.a 00:03:49.010 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:49.010 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 
00:03:49.010 [92/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:49.010 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:49.268 [94/268] Linking static target lib/librte_rcu.a 00:03:49.268 [95/268] Linking static target lib/librte_mempool.a 00:03:49.268 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:49.268 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:49.268 [98/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.526 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:49.526 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:49.526 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:49.784 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:49.785 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:49.785 [104/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.785 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:50.043 [106/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:50.043 [107/268] Linking static target lib/librte_meter.a 00:03:50.043 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:50.043 [109/268] Linking static target lib/librte_net.a 00:03:50.043 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:50.043 [111/268] Linking static target lib/librte_mbuf.a 00:03:50.301 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:50.301 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:50.301 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:50.301 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 
00:03:50.301 [116/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.560 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:50.560 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.818 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:51.076 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:51.076 [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.334 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:51.334 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:51.334 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:51.334 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:51.592 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:51.592 [127/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:51.592 [128/268] Linking static target lib/librte_pci.a 00:03:51.592 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:51.592 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:51.851 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:51.851 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:51.851 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:51.851 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:52.109 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:52.109 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:52.109 [137/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture 
output) 00:03:52.109 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:52.109 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:52.109 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:52.109 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:52.109 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:52.109 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:52.109 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:52.367 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:52.367 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:52.367 [147/268] Linking static target lib/librte_cmdline.a 00:03:52.934 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:52.934 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:52.934 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:52.934 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:52.934 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:52.934 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:52.934 [154/268] Linking static target lib/librte_timer.a 00:03:53.192 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:53.192 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:53.450 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:53.450 [158/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:53.450 [159/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture 
output) 00:03:53.709 [160/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:53.709 [161/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:53.709 [162/268] Linking static target lib/librte_ethdev.a 00:03:53.709 [163/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:53.709 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:53.709 [165/268] Linking static target lib/librte_dmadev.a 00:03:53.968 [166/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:53.968 [167/268] Linking static target lib/librte_compressdev.a 00:03:53.968 [168/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:53.968 [169/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:54.227 [170/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:54.227 [171/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:54.227 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:54.227 [173/268] Linking static target lib/librte_hash.a 00:03:54.227 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:54.485 [175/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:54.485 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:54.485 [177/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:54.743 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:54.743 [179/268] Linking static target lib/librte_cryptodev.a 00:03:54.743 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:54.743 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:54.743 [182/268] Compiling C object 
lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:55.001 [183/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:55.258 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:55.258 [185/268] Linking static target lib/librte_power.a 00:03:55.258 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:55.258 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:55.258 [188/268] Linking static target lib/librte_reorder.a 00:03:55.258 [189/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:55.258 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:55.516 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:55.516 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:55.516 [193/268] Linking static target lib/librte_security.a 00:03:55.774 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.341 [195/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.341 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:56.341 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.341 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:56.341 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:56.599 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:56.869 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:56.869 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:57.127 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:57.127 [204/268] Generating lib/cryptodev.sym_chk with a 
custom command (wrapped by meson to capture output) 00:03:57.127 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:57.127 [206/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:57.127 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:57.385 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:57.385 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:57.385 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:57.385 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:57.643 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:57.643 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:57.643 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:57.643 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:57.643 [216/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:57.643 [217/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:57.643 [218/268] Linking static target drivers/librte_bus_vdev.a 00:03:57.643 [219/268] Linking static target drivers/librte_bus_pci.a 00:03:57.643 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:57.643 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:57.901 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:57.901 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.901 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:57.901 [225/268] Compiling C 
object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:58.158 [226/268] Linking static target drivers/librte_mempool_ring.a 00:03:58.158 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:59.094 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:00.471 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:00.471 [230/268] Linking target lib/librte_eal.so.24.1 00:04:00.730 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:00.730 [232/268] Linking target lib/librte_ring.so.24.1 00:04:00.730 [233/268] Linking target lib/librte_pci.so.24.1 00:04:00.730 [234/268] Linking target lib/librte_meter.so.24.1 00:04:00.730 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:00.730 [236/268] Linking target lib/librte_dmadev.so.24.1 00:04:00.730 [237/268] Linking target lib/librte_timer.so.24.1 00:04:00.730 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:00.990 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:00.990 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:00.990 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:00.990 [242/268] Linking target lib/librte_rcu.so.24.1 00:04:00.990 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:00.990 [244/268] Linking target lib/librte_mempool.so.24.1 00:04:00.990 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:00.990 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:00.990 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:00.990 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 
00:04:00.990 [249/268] Linking target lib/librte_mbuf.so.24.1 00:04:01.249 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:01.249 [251/268] Linking target lib/librte_net.so.24.1 00:04:01.249 [252/268] Linking target lib/librte_compressdev.so.24.1 00:04:01.249 [253/268] Linking target lib/librte_reorder.so.24.1 00:04:01.249 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:04:01.508 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:01.508 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:01.508 [257/268] Linking target lib/librte_hash.so.24.1 00:04:01.508 [258/268] Linking target lib/librte_cmdline.so.24.1 00:04:01.508 [259/268] Linking target lib/librte_security.so.24.1 00:04:01.766 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:02.704 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:02.704 [262/268] Linking target lib/librte_ethdev.so.24.1 00:04:02.704 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:02.704 [264/268] Linking target lib/librte_power.so.24.1 00:04:03.273 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:03.273 [266/268] Linking static target lib/librte_vhost.a 00:04:05.808 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:05.809 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:05.809 INFO: autodetecting backend as ninja 00:04:05.809 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:23.939 CC lib/log/log.o 00:04:23.939 CC lib/log/log_deprecated.o 00:04:23.939 CC lib/log/log_flags.o 00:04:23.939 CC lib/ut/ut.o 00:04:23.939 CC lib/ut_mock/mock.o 00:04:23.939 LIB libspdk_ut.a 00:04:23.939 LIB libspdk_log.a 
00:04:23.939 LIB libspdk_ut_mock.a 00:04:23.939 SO libspdk_ut_mock.so.6.0 00:04:23.939 SO libspdk_ut.so.2.0 00:04:23.939 SO libspdk_log.so.7.1 00:04:23.939 SYMLINK libspdk_ut_mock.so 00:04:23.939 SYMLINK libspdk_ut.so 00:04:23.939 SYMLINK libspdk_log.so 00:04:23.939 CXX lib/trace_parser/trace.o 00:04:23.939 CC lib/dma/dma.o 00:04:23.939 CC lib/ioat/ioat.o 00:04:23.939 CC lib/util/bit_array.o 00:04:23.939 CC lib/util/base64.o 00:04:23.939 CC lib/util/cpuset.o 00:04:23.939 CC lib/util/crc16.o 00:04:23.939 CC lib/util/crc32.o 00:04:23.939 CC lib/util/crc32c.o 00:04:23.939 CC lib/vfio_user/host/vfio_user_pci.o 00:04:23.939 CC lib/util/crc32_ieee.o 00:04:23.939 CC lib/vfio_user/host/vfio_user.o 00:04:23.939 CC lib/util/crc64.o 00:04:23.939 CC lib/util/dif.o 00:04:23.939 LIB libspdk_dma.a 00:04:23.939 SO libspdk_dma.so.5.0 00:04:23.939 CC lib/util/fd.o 00:04:23.939 CC lib/util/fd_group.o 00:04:23.939 SYMLINK libspdk_dma.so 00:04:23.939 CC lib/util/file.o 00:04:23.939 CC lib/util/hexlify.o 00:04:23.939 LIB libspdk_ioat.a 00:04:23.939 CC lib/util/iov.o 00:04:23.939 SO libspdk_ioat.so.7.0 00:04:23.939 CC lib/util/math.o 00:04:23.939 LIB libspdk_vfio_user.a 00:04:23.939 SYMLINK libspdk_ioat.so 00:04:23.939 CC lib/util/net.o 00:04:23.939 CC lib/util/pipe.o 00:04:23.939 SO libspdk_vfio_user.so.5.0 00:04:23.939 CC lib/util/strerror_tls.o 00:04:23.939 CC lib/util/string.o 00:04:23.939 SYMLINK libspdk_vfio_user.so 00:04:23.939 CC lib/util/uuid.o 00:04:23.939 CC lib/util/xor.o 00:04:23.939 CC lib/util/zipf.o 00:04:23.939 CC lib/util/md5.o 00:04:24.509 LIB libspdk_util.a 00:04:24.509 LIB libspdk_trace_parser.a 00:04:24.509 SO libspdk_util.so.10.1 00:04:24.509 SO libspdk_trace_parser.so.6.0 00:04:24.771 SYMLINK libspdk_util.so 00:04:24.771 SYMLINK libspdk_trace_parser.so 00:04:24.771 CC lib/json/json_parse.o 00:04:24.771 CC lib/json/json_util.o 00:04:24.771 CC lib/json/json_write.o 00:04:24.771 CC lib/idxd/idxd_user.o 00:04:24.771 CC lib/idxd/idxd.o 00:04:24.771 CC 
lib/idxd/idxd_kernel.o 00:04:24.771 CC lib/rdma_utils/rdma_utils.o 00:04:24.771 CC lib/vmd/vmd.o 00:04:24.771 CC lib/conf/conf.o 00:04:24.771 CC lib/env_dpdk/env.o 00:04:25.030 CC lib/env_dpdk/memory.o 00:04:25.030 CC lib/env_dpdk/pci.o 00:04:25.030 LIB libspdk_conf.a 00:04:25.030 CC lib/vmd/led.o 00:04:25.289 SO libspdk_conf.so.6.0 00:04:25.289 CC lib/env_dpdk/init.o 00:04:25.289 LIB libspdk_rdma_utils.a 00:04:25.289 LIB libspdk_json.a 00:04:25.289 SO libspdk_rdma_utils.so.1.0 00:04:25.289 SYMLINK libspdk_conf.so 00:04:25.289 CC lib/env_dpdk/threads.o 00:04:25.289 SO libspdk_json.so.6.0 00:04:25.289 SYMLINK libspdk_rdma_utils.so 00:04:25.289 SYMLINK libspdk_json.so 00:04:25.289 CC lib/env_dpdk/pci_ioat.o 00:04:25.289 CC lib/env_dpdk/pci_virtio.o 00:04:25.289 CC lib/env_dpdk/pci_vmd.o 00:04:25.289 CC lib/env_dpdk/pci_idxd.o 00:04:25.550 CC lib/env_dpdk/pci_event.o 00:04:25.550 CC lib/env_dpdk/sigbus_handler.o 00:04:25.550 CC lib/env_dpdk/pci_dpdk.o 00:04:25.550 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:25.551 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:25.551 LIB libspdk_idxd.a 00:04:25.551 SO libspdk_idxd.so.12.1 00:04:25.811 SYMLINK libspdk_idxd.so 00:04:25.811 LIB libspdk_vmd.a 00:04:25.811 CC lib/jsonrpc/jsonrpc_server.o 00:04:25.811 CC lib/jsonrpc/jsonrpc_client.o 00:04:25.811 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:25.811 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:25.811 SO libspdk_vmd.so.6.0 00:04:25.811 CC lib/rdma_provider/common.o 00:04:25.811 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:25.811 SYMLINK libspdk_vmd.so 00:04:26.071 LIB libspdk_rdma_provider.a 00:04:26.071 SO libspdk_rdma_provider.so.7.0 00:04:26.071 LIB libspdk_jsonrpc.a 00:04:26.071 SYMLINK libspdk_rdma_provider.so 00:04:26.331 SO libspdk_jsonrpc.so.6.0 00:04:26.331 SYMLINK libspdk_jsonrpc.so 00:04:26.590 CC lib/rpc/rpc.o 00:04:26.849 LIB libspdk_env_dpdk.a 00:04:26.849 SO libspdk_env_dpdk.so.15.1 00:04:26.849 LIB libspdk_rpc.a 00:04:27.108 SO libspdk_rpc.so.6.0 00:04:27.108 SYMLINK 
libspdk_rpc.so 00:04:27.108 SYMLINK libspdk_env_dpdk.so 00:04:27.368 CC lib/trace/trace.o 00:04:27.368 CC lib/trace/trace_flags.o 00:04:27.368 CC lib/trace/trace_rpc.o 00:04:27.368 CC lib/keyring/keyring.o 00:04:27.368 CC lib/keyring/keyring_rpc.o 00:04:27.368 CC lib/notify/notify.o 00:04:27.368 CC lib/notify/notify_rpc.o 00:04:27.627 LIB libspdk_notify.a 00:04:27.627 LIB libspdk_trace.a 00:04:27.627 LIB libspdk_keyring.a 00:04:27.627 SO libspdk_notify.so.6.0 00:04:27.627 SO libspdk_trace.so.11.0 00:04:27.627 SO libspdk_keyring.so.2.0 00:04:27.887 SYMLINK libspdk_notify.so 00:04:27.887 SYMLINK libspdk_trace.so 00:04:27.887 SYMLINK libspdk_keyring.so 00:04:28.147 CC lib/sock/sock_rpc.o 00:04:28.147 CC lib/sock/sock.o 00:04:28.147 CC lib/thread/thread.o 00:04:28.147 CC lib/thread/iobuf.o 00:04:28.716 LIB libspdk_sock.a 00:04:28.716 SO libspdk_sock.so.10.0 00:04:28.716 SYMLINK libspdk_sock.so 00:04:29.282 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:29.282 CC lib/nvme/nvme_ctrlr.o 00:04:29.282 CC lib/nvme/nvme_fabric.o 00:04:29.282 CC lib/nvme/nvme_ns_cmd.o 00:04:29.282 CC lib/nvme/nvme_ns.o 00:04:29.282 CC lib/nvme/nvme_pcie.o 00:04:29.282 CC lib/nvme/nvme.o 00:04:29.282 CC lib/nvme/nvme_pcie_common.o 00:04:29.282 CC lib/nvme/nvme_qpair.o 00:04:29.848 LIB libspdk_thread.a 00:04:29.848 CC lib/nvme/nvme_quirks.o 00:04:29.848 CC lib/nvme/nvme_transport.o 00:04:30.107 SO libspdk_thread.so.11.0 00:04:30.107 CC lib/nvme/nvme_discovery.o 00:04:30.107 SYMLINK libspdk_thread.so 00:04:30.107 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:30.107 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:30.107 CC lib/nvme/nvme_tcp.o 00:04:30.107 CC lib/nvme/nvme_opal.o 00:04:30.365 CC lib/nvme/nvme_io_msg.o 00:04:30.696 CC lib/nvme/nvme_poll_group.o 00:04:30.696 CC lib/nvme/nvme_zns.o 00:04:30.696 CC lib/nvme/nvme_stubs.o 00:04:30.696 CC lib/nvme/nvme_auth.o 00:04:30.696 CC lib/nvme/nvme_cuse.o 00:04:30.971 CC lib/nvme/nvme_rdma.o 00:04:30.971 CC lib/accel/accel.o 00:04:30.971 CC lib/blob/blobstore.o 00:04:31.229 
CC lib/accel/accel_rpc.o 00:04:31.486 CC lib/init/json_config.o 00:04:31.486 CC lib/virtio/virtio.o 00:04:31.486 CC lib/virtio/virtio_vhost_user.o 00:04:31.743 CC lib/init/subsystem.o 00:04:31.743 CC lib/virtio/virtio_vfio_user.o 00:04:31.743 CC lib/blob/request.o 00:04:31.743 CC lib/blob/zeroes.o 00:04:31.743 CC lib/blob/blob_bs_dev.o 00:04:32.000 CC lib/init/subsystem_rpc.o 00:04:32.000 CC lib/virtio/virtio_pci.o 00:04:32.000 CC lib/init/rpc.o 00:04:32.000 CC lib/accel/accel_sw.o 00:04:32.259 LIB libspdk_init.a 00:04:32.259 SO libspdk_init.so.6.0 00:04:32.259 CC lib/fsdev/fsdev.o 00:04:32.259 CC lib/fsdev/fsdev_io.o 00:04:32.259 CC lib/fsdev/fsdev_rpc.o 00:04:32.259 SYMLINK libspdk_init.so 00:04:32.259 LIB libspdk_virtio.a 00:04:32.259 SO libspdk_virtio.so.7.0 00:04:32.518 LIB libspdk_accel.a 00:04:32.518 SYMLINK libspdk_virtio.so 00:04:32.518 SO libspdk_accel.so.16.0 00:04:32.518 CC lib/event/app.o 00:04:32.518 CC lib/event/app_rpc.o 00:04:32.518 CC lib/event/scheduler_static.o 00:04:32.518 CC lib/event/reactor.o 00:04:32.518 CC lib/event/log_rpc.o 00:04:32.518 LIB libspdk_nvme.a 00:04:32.518 SYMLINK libspdk_accel.so 00:04:32.777 SO libspdk_nvme.so.15.0 00:04:32.777 CC lib/bdev/bdev.o 00:04:32.777 CC lib/bdev/bdev_rpc.o 00:04:32.777 CC lib/bdev/bdev_zone.o 00:04:32.777 CC lib/bdev/part.o 00:04:32.777 CC lib/bdev/scsi_nvme.o 00:04:33.036 SYMLINK libspdk_nvme.so 00:04:33.036 LIB libspdk_fsdev.a 00:04:33.036 SO libspdk_fsdev.so.2.0 00:04:33.295 LIB libspdk_event.a 00:04:33.295 SYMLINK libspdk_fsdev.so 00:04:33.295 SO libspdk_event.so.14.0 00:04:33.295 SYMLINK libspdk_event.so 00:04:33.554 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:34.491 LIB libspdk_fuse_dispatcher.a 00:04:34.491 SO libspdk_fuse_dispatcher.so.1.0 00:04:34.491 SYMLINK libspdk_fuse_dispatcher.so 00:04:35.058 LIB libspdk_blob.a 00:04:35.058 SO libspdk_blob.so.12.0 00:04:35.317 SYMLINK libspdk_blob.so 00:04:35.577 CC lib/lvol/lvol.o 00:04:35.577 CC lib/blobfs/blobfs.o 00:04:35.577 CC 
lib/blobfs/tree.o 00:04:36.146 LIB libspdk_bdev.a 00:04:36.147 SO libspdk_bdev.so.17.0 00:04:36.147 SYMLINK libspdk_bdev.so 00:04:36.406 CC lib/nbd/nbd.o 00:04:36.406 CC lib/ublk/ublk.o 00:04:36.406 CC lib/nbd/nbd_rpc.o 00:04:36.406 CC lib/ublk/ublk_rpc.o 00:04:36.406 CC lib/scsi/dev.o 00:04:36.406 CC lib/scsi/lun.o 00:04:36.406 CC lib/nvmf/ctrlr.o 00:04:36.406 CC lib/ftl/ftl_core.o 00:04:36.666 LIB libspdk_blobfs.a 00:04:36.666 SO libspdk_blobfs.so.11.0 00:04:36.666 CC lib/ftl/ftl_init.o 00:04:36.666 CC lib/ftl/ftl_layout.o 00:04:36.666 SYMLINK libspdk_blobfs.so 00:04:36.666 CC lib/nvmf/ctrlr_discovery.o 00:04:36.666 CC lib/nvmf/ctrlr_bdev.o 00:04:36.666 LIB libspdk_lvol.a 00:04:36.931 SO libspdk_lvol.so.11.0 00:04:36.931 CC lib/scsi/port.o 00:04:36.931 SYMLINK libspdk_lvol.so 00:04:36.931 CC lib/ftl/ftl_debug.o 00:04:36.931 CC lib/ftl/ftl_io.o 00:04:36.931 CC lib/nvmf/subsystem.o 00:04:36.931 CC lib/scsi/scsi.o 00:04:36.931 LIB libspdk_nbd.a 00:04:37.198 CC lib/ftl/ftl_sb.o 00:04:37.198 SO libspdk_nbd.so.7.0 00:04:37.198 CC lib/ftl/ftl_l2p.o 00:04:37.198 SYMLINK libspdk_nbd.so 00:04:37.198 CC lib/ftl/ftl_l2p_flat.o 00:04:37.198 CC lib/nvmf/nvmf.o 00:04:37.198 CC lib/scsi/scsi_bdev.o 00:04:37.198 LIB libspdk_ublk.a 00:04:37.198 CC lib/ftl/ftl_nv_cache.o 00:04:37.198 CC lib/nvmf/nvmf_rpc.o 00:04:37.198 SO libspdk_ublk.so.3.0 00:04:37.459 CC lib/nvmf/transport.o 00:04:37.459 SYMLINK libspdk_ublk.so 00:04:37.459 CC lib/nvmf/tcp.o 00:04:37.459 CC lib/nvmf/stubs.o 00:04:37.719 CC lib/nvmf/mdns_server.o 00:04:37.979 CC lib/scsi/scsi_pr.o 00:04:37.979 CC lib/nvmf/rdma.o 00:04:38.239 CC lib/scsi/scsi_rpc.o 00:04:38.239 CC lib/nvmf/auth.o 00:04:38.239 CC lib/scsi/task.o 00:04:38.239 CC lib/ftl/ftl_band.o 00:04:38.498 CC lib/ftl/ftl_band_ops.o 00:04:38.498 CC lib/ftl/ftl_writer.o 00:04:38.498 CC lib/ftl/ftl_rq.o 00:04:38.498 LIB libspdk_scsi.a 00:04:38.498 CC lib/ftl/ftl_reloc.o 00:04:38.498 SO libspdk_scsi.so.9.0 00:04:38.757 CC lib/ftl/ftl_l2p_cache.o 00:04:38.757 SYMLINK 
libspdk_scsi.so 00:04:38.757 CC lib/ftl/ftl_p2l.o 00:04:38.757 CC lib/ftl/ftl_p2l_log.o 00:04:38.757 CC lib/ftl/mngt/ftl_mngt.o 00:04:39.016 CC lib/iscsi/conn.o 00:04:39.016 CC lib/iscsi/init_grp.o 00:04:39.016 CC lib/vhost/vhost.o 00:04:39.275 CC lib/vhost/vhost_rpc.o 00:04:39.275 CC lib/vhost/vhost_scsi.o 00:04:39.275 CC lib/vhost/vhost_blk.o 00:04:39.275 CC lib/vhost/rte_vhost_user.o 00:04:39.275 CC lib/iscsi/iscsi.o 00:04:39.275 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:39.535 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:39.535 CC lib/iscsi/param.o 00:04:39.535 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:39.794 CC lib/iscsi/portal_grp.o 00:04:39.794 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:39.794 CC lib/iscsi/tgt_node.o 00:04:40.054 CC lib/iscsi/iscsi_subsystem.o 00:04:40.054 CC lib/iscsi/iscsi_rpc.o 00:04:40.054 CC lib/iscsi/task.o 00:04:40.314 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:40.314 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:40.314 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:40.314 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:40.314 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:40.573 LIB libspdk_vhost.a 00:04:40.573 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:40.573 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:40.573 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:40.573 CC lib/ftl/utils/ftl_conf.o 00:04:40.573 SO libspdk_vhost.so.8.0 00:04:40.573 CC lib/ftl/utils/ftl_md.o 00:04:40.573 CC lib/ftl/utils/ftl_mempool.o 00:04:40.573 SYMLINK libspdk_vhost.so 00:04:40.573 CC lib/ftl/utils/ftl_bitmap.o 00:04:40.833 CC lib/ftl/utils/ftl_property.o 00:04:40.833 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:40.833 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:40.833 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:40.833 LIB libspdk_nvmf.a 00:04:40.833 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:40.833 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:40.833 SO libspdk_nvmf.so.20.0 00:04:41.091 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:41.091 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:41.091 CC lib/ftl/upgrade/ftl_sb_v3.o 
00:04:41.091 CC lib/ftl/upgrade/ftl_sb_v5.o
00:04:41.091 CC lib/ftl/nvc/ftl_nvc_dev.o
00:04:41.091 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:04:41.091 LIB libspdk_iscsi.a
00:04:41.091 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:04:41.091 CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:04:41.091 SO libspdk_iscsi.so.8.0
00:04:41.091 SYMLINK libspdk_nvmf.so
00:04:41.091 CC lib/ftl/base/ftl_base_dev.o
00:04:41.091 CC lib/ftl/base/ftl_base_bdev.o
00:04:41.091 CC lib/ftl/ftl_trace.o
00:04:41.350 SYMLINK libspdk_iscsi.so
00:04:41.350 LIB libspdk_ftl.a
00:04:41.609 SO libspdk_ftl.so.9.0
00:04:41.868 SYMLINK libspdk_ftl.so
00:04:42.436 CC module/env_dpdk/env_dpdk_rpc.o
00:04:42.436 CC module/scheduler/gscheduler/gscheduler.o
00:04:42.436 CC module/keyring/file/keyring.o
00:04:42.436 CC module/accel/error/accel_error.o
00:04:42.436 CC module/blob/bdev/blob_bdev.o
00:04:42.436 CC module/scheduler/dynamic/scheduler_dynamic.o
00:04:42.436 CC module/keyring/linux/keyring.o
00:04:42.436 CC module/fsdev/aio/fsdev_aio.o
00:04:42.436 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:04:42.437 CC module/sock/posix/posix.o
00:04:42.695 LIB libspdk_env_dpdk_rpc.a
00:04:42.695 SO libspdk_env_dpdk_rpc.so.6.0
00:04:42.695 LIB libspdk_scheduler_gscheduler.a
00:04:42.695 CC module/keyring/linux/keyring_rpc.o
00:04:42.695 LIB libspdk_scheduler_dpdk_governor.a
00:04:42.695 SO libspdk_scheduler_gscheduler.so.4.0
00:04:42.695 SO libspdk_scheduler_dpdk_governor.so.4.0
00:04:42.695 CC module/accel/error/accel_error_rpc.o
00:04:42.695 SYMLINK libspdk_env_dpdk_rpc.so
00:04:42.695 CC module/keyring/file/keyring_rpc.o
00:04:42.695 SYMLINK libspdk_scheduler_gscheduler.so
00:04:42.695 CC module/fsdev/aio/fsdev_aio_rpc.o
00:04:42.695 CC module/fsdev/aio/linux_aio_mgr.o
00:04:42.695 SYMLINK libspdk_scheduler_dpdk_governor.so
00:04:42.695 LIB libspdk_scheduler_dynamic.a
00:04:42.695 LIB libspdk_keyring_linux.a
00:04:42.695 LIB libspdk_blob_bdev.a
00:04:42.695 SO libspdk_scheduler_dynamic.so.4.0
00:04:42.695 SO libspdk_keyring_linux.so.1.0
00:04:42.953 SO libspdk_blob_bdev.so.12.0
00:04:42.953 LIB libspdk_accel_error.a
00:04:42.953 SYMLINK libspdk_scheduler_dynamic.so
00:04:42.953 LIB libspdk_keyring_file.a
00:04:42.953 SYMLINK libspdk_blob_bdev.so
00:04:42.953 SYMLINK libspdk_keyring_linux.so
00:04:42.953 SO libspdk_accel_error.so.2.0
00:04:42.953 CC module/accel/ioat/accel_ioat.o
00:04:42.953 SO libspdk_keyring_file.so.2.0
00:04:42.953 SYMLINK libspdk_accel_error.so
00:04:42.953 CC module/accel/ioat/accel_ioat_rpc.o
00:04:42.953 SYMLINK libspdk_keyring_file.so
00:04:43.212 CC module/accel/dsa/accel_dsa.o
00:04:43.212 CC module/accel/iaa/accel_iaa.o
00:04:43.212 CC module/accel/iaa/accel_iaa_rpc.o
00:04:43.212 LIB libspdk_accel_ioat.a
00:04:43.212 CC module/bdev/delay/vbdev_delay.o
00:04:43.212 CC module/blobfs/bdev/blobfs_bdev.o
00:04:43.212 CC module/bdev/error/vbdev_error.o
00:04:43.212 SO libspdk_accel_ioat.so.6.0
00:04:43.212 CC module/bdev/gpt/gpt.o
00:04:43.212 SYMLINK libspdk_accel_ioat.so
00:04:43.212 CC module/bdev/gpt/vbdev_gpt.o
00:04:43.470 LIB libspdk_fsdev_aio.a
00:04:43.470 CC module/bdev/delay/vbdev_delay_rpc.o
00:04:43.470 LIB libspdk_accel_iaa.a
00:04:43.470 SO libspdk_fsdev_aio.so.1.0
00:04:43.470 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:04:43.470 SO libspdk_accel_iaa.so.3.0
00:04:43.470 CC module/accel/dsa/accel_dsa_rpc.o
00:04:43.470 LIB libspdk_sock_posix.a
00:04:43.470 SO libspdk_sock_posix.so.6.0
00:04:43.470 SYMLINK libspdk_fsdev_aio.so
00:04:43.470 SYMLINK libspdk_accel_iaa.so
00:04:43.470 CC module/bdev/error/vbdev_error_rpc.o
00:04:43.470 SYMLINK libspdk_sock_posix.so
00:04:43.470 LIB libspdk_accel_dsa.a
00:04:43.470 LIB libspdk_blobfs_bdev.a
00:04:43.470 LIB libspdk_bdev_gpt.a
00:04:43.729 SO libspdk_accel_dsa.so.5.0
00:04:43.729 SO libspdk_blobfs_bdev.so.6.0
00:04:43.729 CC module/bdev/lvol/vbdev_lvol.o
00:04:43.729 SO libspdk_bdev_gpt.so.6.0
00:04:43.729 LIB libspdk_bdev_delay.a
00:04:43.729 SO libspdk_bdev_delay.so.6.0
00:04:43.729 CC module/bdev/malloc/bdev_malloc.o
00:04:43.729 SYMLINK libspdk_accel_dsa.so
00:04:43.729 LIB libspdk_bdev_error.a
00:04:43.729 SYMLINK libspdk_blobfs_bdev.so
00:04:43.729 CC module/bdev/malloc/bdev_malloc_rpc.o
00:04:43.729 CC module/bdev/null/bdev_null.o
00:04:43.729 CC module/bdev/null/bdev_null_rpc.o
00:04:43.729 SYMLINK libspdk_bdev_gpt.so
00:04:43.729 SO libspdk_bdev_error.so.6.0
00:04:43.729 CC module/bdev/nvme/bdev_nvme.o
00:04:43.729 CC module/bdev/passthru/vbdev_passthru.o
00:04:43.729 SYMLINK libspdk_bdev_delay.so
00:04:43.729 SYMLINK libspdk_bdev_error.so
00:04:43.729 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:04:43.998 CC module/bdev/raid/bdev_raid.o
00:04:43.998 CC module/bdev/split/vbdev_split.o
00:04:43.998 CC module/bdev/zone_block/vbdev_zone_block.o
00:04:43.998 LIB libspdk_bdev_null.a
00:04:43.998 CC module/bdev/aio/bdev_aio.o
00:04:43.998 SO libspdk_bdev_null.so.6.0
00:04:43.998 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:04:44.272 LIB libspdk_bdev_malloc.a
00:04:44.272 CC module/bdev/split/vbdev_split_rpc.o
00:04:44.272 SYMLINK libspdk_bdev_null.so
00:04:44.272 SO libspdk_bdev_malloc.so.6.0
00:04:44.272 CC module/bdev/nvme/bdev_nvme_rpc.o
00:04:44.272 LIB libspdk_bdev_lvol.a
00:04:44.272 SYMLINK libspdk_bdev_malloc.so
00:04:44.272 LIB libspdk_bdev_passthru.a
00:04:44.272 SO libspdk_bdev_lvol.so.6.0
00:04:44.272 SO libspdk_bdev_passthru.so.6.0
00:04:44.272 CC module/bdev/ftl/bdev_ftl.o
00:04:44.272 LIB libspdk_bdev_split.a
00:04:44.272 SO libspdk_bdev_split.so.6.0
00:04:44.272 SYMLINK libspdk_bdev_lvol.so
00:04:44.272 SYMLINK libspdk_bdev_passthru.so
00:04:44.272 CC module/bdev/ftl/bdev_ftl_rpc.o
00:04:44.272 CC module/bdev/nvme/nvme_rpc.o
00:04:44.532 CC module/bdev/iscsi/bdev_iscsi.o
00:04:44.532 SYMLINK libspdk_bdev_split.so
00:04:44.532 CC module/bdev/aio/bdev_aio_rpc.o
00:04:44.532 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:04:44.532 LIB libspdk_bdev_aio.a
00:04:44.532 LIB libspdk_bdev_zone_block.a
00:04:44.532 CC module/bdev/nvme/bdev_mdns_client.o
00:04:44.532 CC module/bdev/virtio/bdev_virtio_scsi.o
00:04:44.532 SO libspdk_bdev_aio.so.6.0
00:04:44.532 SO libspdk_bdev_zone_block.so.6.0
00:04:44.532 LIB libspdk_bdev_ftl.a
00:04:44.790 CC module/bdev/nvme/vbdev_opal.o
00:04:44.790 SO libspdk_bdev_ftl.so.6.0
00:04:44.790 SYMLINK libspdk_bdev_zone_block.so
00:04:44.790 CC module/bdev/nvme/vbdev_opal_rpc.o
00:04:44.790 SYMLINK libspdk_bdev_aio.so
00:04:44.790 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:04:44.790 SYMLINK libspdk_bdev_ftl.so
00:04:44.790 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:04:44.790 CC module/bdev/virtio/bdev_virtio_blk.o
00:04:44.790 CC module/bdev/raid/bdev_raid_rpc.o
00:04:45.048 CC module/bdev/raid/bdev_raid_sb.o
00:04:45.048 LIB libspdk_bdev_iscsi.a
00:04:45.048 CC module/bdev/raid/raid0.o
00:04:45.048 CC module/bdev/virtio/bdev_virtio_rpc.o
00:04:45.048 SO libspdk_bdev_iscsi.so.6.0
00:04:45.048 SYMLINK libspdk_bdev_iscsi.so
00:04:45.048 CC module/bdev/raid/raid1.o
00:04:45.048 CC module/bdev/raid/concat.o
00:04:45.048 CC module/bdev/raid/raid5f.o
00:04:45.307 LIB libspdk_bdev_virtio.a
00:04:45.307 SO libspdk_bdev_virtio.so.6.0
00:04:45.307 SYMLINK libspdk_bdev_virtio.so
00:04:45.566 LIB libspdk_bdev_raid.a
00:04:45.825 SO libspdk_bdev_raid.so.6.0
00:04:45.825 SYMLINK libspdk_bdev_raid.so
00:04:46.764 LIB libspdk_bdev_nvme.a
00:04:47.023 SO libspdk_bdev_nvme.so.7.1
00:04:47.023 SYMLINK libspdk_bdev_nvme.so
00:04:47.593 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:04:47.593 CC module/event/subsystems/scheduler/scheduler.o
00:04:47.593 CC module/event/subsystems/vmd/vmd.o
00:04:47.593 CC module/event/subsystems/vmd/vmd_rpc.o
00:04:47.593 CC module/event/subsystems/sock/sock.o
00:04:47.593 CC module/event/subsystems/keyring/keyring.o
00:04:47.593 CC module/event/subsystems/iobuf/iobuf.o
00:04:47.593 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:04:47.593 CC module/event/subsystems/fsdev/fsdev.o
00:04:47.593 LIB libspdk_event_scheduler.a
00:04:47.853 LIB libspdk_event_vmd.a
00:04:47.853 LIB libspdk_event_keyring.a
00:04:47.853 LIB libspdk_event_vhost_blk.a
00:04:47.853 LIB libspdk_event_sock.a
00:04:47.853 LIB libspdk_event_iobuf.a
00:04:47.853 SO libspdk_event_keyring.so.1.0
00:04:47.853 SO libspdk_event_scheduler.so.4.0
00:04:47.853 SO libspdk_event_vmd.so.6.0
00:04:47.853 SO libspdk_event_vhost_blk.so.3.0
00:04:47.853 SO libspdk_event_sock.so.5.0
00:04:47.853 LIB libspdk_event_fsdev.a
00:04:47.853 SO libspdk_event_iobuf.so.3.0
00:04:47.853 SO libspdk_event_fsdev.so.1.0
00:04:47.853 SYMLINK libspdk_event_vhost_blk.so
00:04:47.853 SYMLINK libspdk_event_scheduler.so
00:04:47.853 SYMLINK libspdk_event_sock.so
00:04:47.853 SYMLINK libspdk_event_vmd.so
00:04:47.853 SYMLINK libspdk_event_keyring.so
00:04:47.853 SYMLINK libspdk_event_fsdev.so
00:04:47.853 SYMLINK libspdk_event_iobuf.so
00:04:48.111 CC module/event/subsystems/accel/accel.o
00:04:48.370 LIB libspdk_event_accel.a
00:04:48.370 SO libspdk_event_accel.so.6.0
00:04:48.370 SYMLINK libspdk_event_accel.so
00:04:48.938 CC module/event/subsystems/bdev/bdev.o
00:04:48.938 LIB libspdk_event_bdev.a
00:04:49.196 SO libspdk_event_bdev.so.6.0
00:04:49.196 SYMLINK libspdk_event_bdev.so
00:04:49.455 CC module/event/subsystems/scsi/scsi.o
00:04:49.455 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:04:49.455 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:04:49.455 CC module/event/subsystems/nbd/nbd.o
00:04:49.455 CC module/event/subsystems/ublk/ublk.o
00:04:49.714 LIB libspdk_event_scsi.a
00:04:49.714 LIB libspdk_event_nbd.a
00:04:49.714 SO libspdk_event_scsi.so.6.0
00:04:49.714 LIB libspdk_event_ublk.a
00:04:49.714 SO libspdk_event_nbd.so.6.0
00:04:49.714 LIB libspdk_event_nvmf.a
00:04:49.714 SO libspdk_event_ublk.so.3.0
00:04:49.714 SYMLINK libspdk_event_scsi.so
00:04:49.714 SYMLINK libspdk_event_nbd.so
00:04:49.714 SO libspdk_event_nvmf.so.6.0
00:04:49.714 SYMLINK libspdk_event_ublk.so
00:04:49.714 SYMLINK libspdk_event_nvmf.so
00:04:49.973 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:04:49.973 CC module/event/subsystems/iscsi/iscsi.o
00:04:50.231 LIB libspdk_event_vhost_scsi.a
00:04:50.231 LIB libspdk_event_iscsi.a
00:04:50.231 SO libspdk_event_vhost_scsi.so.3.0
00:04:50.231 SO libspdk_event_iscsi.so.6.0
00:04:50.232 SYMLINK libspdk_event_vhost_scsi.so
00:04:50.232 SYMLINK libspdk_event_iscsi.so
00:04:50.490 SO libspdk.so.6.0
00:04:50.490 SYMLINK libspdk.so
00:04:50.748 CC test/rpc_client/rpc_client_test.o
00:04:50.748 TEST_HEADER include/spdk/accel.h
00:04:50.748 TEST_HEADER include/spdk/accel_module.h
00:04:50.748 TEST_HEADER include/spdk/assert.h
00:04:50.748 TEST_HEADER include/spdk/barrier.h
00:04:50.748 TEST_HEADER include/spdk/base64.h
00:04:50.748 TEST_HEADER include/spdk/bdev.h
00:04:50.748 TEST_HEADER include/spdk/bdev_module.h
00:04:50.748 TEST_HEADER include/spdk/bdev_zone.h
00:04:50.748 TEST_HEADER include/spdk/bit_array.h
00:04:50.748 CXX app/trace/trace.o
00:04:50.748 TEST_HEADER include/spdk/bit_pool.h
00:04:50.748 TEST_HEADER include/spdk/blob_bdev.h
00:04:50.748 CC examples/interrupt_tgt/interrupt_tgt.o
00:04:50.748 TEST_HEADER include/spdk/blobfs_bdev.h
00:04:50.748 TEST_HEADER include/spdk/blobfs.h
00:04:50.748 TEST_HEADER include/spdk/blob.h
00:04:50.748 TEST_HEADER include/spdk/conf.h
00:04:50.748 TEST_HEADER include/spdk/config.h
00:04:50.748 TEST_HEADER include/spdk/cpuset.h
00:04:50.748 TEST_HEADER include/spdk/crc16.h
00:04:50.748 TEST_HEADER include/spdk/crc32.h
00:04:50.748 TEST_HEADER include/spdk/crc64.h
00:04:50.748 TEST_HEADER include/spdk/dif.h
00:04:50.748 TEST_HEADER include/spdk/dma.h
00:04:50.748 TEST_HEADER include/spdk/endian.h
00:04:50.748 TEST_HEADER include/spdk/env_dpdk.h
00:04:50.748 TEST_HEADER include/spdk/env.h
00:04:50.748 TEST_HEADER include/spdk/event.h
00:04:50.748 TEST_HEADER include/spdk/fd_group.h
00:04:50.748 TEST_HEADER include/spdk/fd.h
00:04:50.748 TEST_HEADER include/spdk/file.h
00:04:50.748 TEST_HEADER include/spdk/fsdev.h
00:04:50.748 TEST_HEADER include/spdk/fsdev_module.h
00:04:50.748 TEST_HEADER include/spdk/ftl.h
00:04:50.748 TEST_HEADER include/spdk/gpt_spec.h
00:04:50.748 CC test/thread/poller_perf/poller_perf.o
00:04:50.748 TEST_HEADER include/spdk/hexlify.h
00:04:50.748 TEST_HEADER include/spdk/histogram_data.h
00:04:50.748 TEST_HEADER include/spdk/idxd.h
00:04:50.748 TEST_HEADER include/spdk/idxd_spec.h
00:04:50.748 CC examples/util/zipf/zipf.o
00:04:50.748 TEST_HEADER include/spdk/init.h
00:04:50.748 TEST_HEADER include/spdk/ioat.h
00:04:50.749 TEST_HEADER include/spdk/ioat_spec.h
00:04:50.749 TEST_HEADER include/spdk/iscsi_spec.h
00:04:50.749 CC examples/ioat/perf/perf.o
00:04:50.749 TEST_HEADER include/spdk/json.h
00:04:50.749 TEST_HEADER include/spdk/jsonrpc.h
00:04:50.749 TEST_HEADER include/spdk/keyring.h
00:04:51.008 TEST_HEADER include/spdk/keyring_module.h
00:04:51.008 TEST_HEADER include/spdk/likely.h
00:04:51.008 TEST_HEADER include/spdk/log.h
00:04:51.008 TEST_HEADER include/spdk/lvol.h
00:04:51.008 TEST_HEADER include/spdk/md5.h
00:04:51.008 TEST_HEADER include/spdk/memory.h
00:04:51.008 TEST_HEADER include/spdk/mmio.h
00:04:51.008 TEST_HEADER include/spdk/nbd.h
00:04:51.008 TEST_HEADER include/spdk/net.h
00:04:51.008 TEST_HEADER include/spdk/notify.h
00:04:51.008 TEST_HEADER include/spdk/nvme.h
00:04:51.008 TEST_HEADER include/spdk/nvme_intel.h
00:04:51.008 TEST_HEADER include/spdk/nvme_ocssd.h
00:04:51.008 TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:04:51.008 TEST_HEADER include/spdk/nvme_spec.h
00:04:51.008 TEST_HEADER include/spdk/nvme_zns.h
00:04:51.008 TEST_HEADER include/spdk/nvmf_cmd.h
00:04:51.008 TEST_HEADER include/spdk/nvmf_fc_spec.h
00:04:51.008 TEST_HEADER include/spdk/nvmf.h
00:04:51.008 TEST_HEADER include/spdk/nvmf_spec.h
00:04:51.008 TEST_HEADER include/spdk/nvmf_transport.h
00:04:51.008 TEST_HEADER include/spdk/opal.h
00:04:51.008 TEST_HEADER include/spdk/opal_spec.h
00:04:51.008 TEST_HEADER include/spdk/pci_ids.h
00:04:51.008 TEST_HEADER include/spdk/pipe.h
00:04:51.008 TEST_HEADER include/spdk/queue.h
00:04:51.008 TEST_HEADER include/spdk/reduce.h
00:04:51.008 TEST_HEADER include/spdk/rpc.h
00:04:51.008 TEST_HEADER include/spdk/scheduler.h
00:04:51.008 TEST_HEADER include/spdk/scsi.h
00:04:51.008 TEST_HEADER include/spdk/scsi_spec.h
00:04:51.008 TEST_HEADER include/spdk/sock.h
00:04:51.008 TEST_HEADER include/spdk/stdinc.h
00:04:51.008 TEST_HEADER include/spdk/string.h
00:04:51.008 TEST_HEADER include/spdk/thread.h
00:04:51.008 CC test/env/mem_callbacks/mem_callbacks.o
00:04:51.008 TEST_HEADER include/spdk/trace.h
00:04:51.008 TEST_HEADER include/spdk/trace_parser.h
00:04:51.008 TEST_HEADER include/spdk/tree.h
00:04:51.008 CC test/dma/test_dma/test_dma.o
00:04:51.008 TEST_HEADER include/spdk/ublk.h
00:04:51.008 TEST_HEADER include/spdk/util.h
00:04:51.008 CC test/app/bdev_svc/bdev_svc.o
00:04:51.008 TEST_HEADER include/spdk/uuid.h
00:04:51.008 TEST_HEADER include/spdk/version.h
00:04:51.008 TEST_HEADER include/spdk/vfio_user_pci.h
00:04:51.008 TEST_HEADER include/spdk/vfio_user_spec.h
00:04:51.008 TEST_HEADER include/spdk/vhost.h
00:04:51.008 TEST_HEADER include/spdk/vmd.h
00:04:51.008 TEST_HEADER include/spdk/xor.h
00:04:51.008 TEST_HEADER include/spdk/zipf.h
00:04:51.008 CXX test/cpp_headers/accel.o
00:04:51.008 LINK zipf
00:04:51.008 LINK rpc_client_test
00:04:51.008 LINK interrupt_tgt
00:04:51.008 LINK poller_perf
00:04:51.008 LINK ioat_perf
00:04:51.267 LINK bdev_svc
00:04:51.267 CXX test/cpp_headers/accel_module.o
00:04:51.267 LINK spdk_trace
00:04:51.267 CC test/env/vtophys/vtophys.o
00:04:51.267 CC examples/ioat/verify/verify.o
00:04:51.267 CXX test/cpp_headers/assert.o
00:04:51.526 CXX test/cpp_headers/barrier.o
00:04:51.526 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:04:51.526 CC test/env/memory/memory_ut.o
00:04:51.526 LINK vtophys
00:04:51.526 LINK mem_callbacks
00:04:51.526 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:04:51.526 LINK test_dma
00:04:51.526 CC app/trace_record/trace_record.o
00:04:51.526 CXX test/cpp_headers/base64.o
00:04:51.526 LINK verify
00:04:51.785 CXX test/cpp_headers/bdev.o
00:04:51.785 CC test/env/pci/pci_ut.o
00:04:51.785 CXX test/cpp_headers/bdev_module.o
00:04:51.785 LINK env_dpdk_post_init
00:04:51.785 CC test/app/histogram_perf/histogram_perf.o
00:04:51.785 CXX test/cpp_headers/bdev_zone.o
00:04:51.785 LINK spdk_trace_record
00:04:52.050 CC app/nvmf_tgt/nvmf_main.o
00:04:52.050 CC examples/sock/hello_world/hello_sock.o
00:04:52.050 CC examples/thread/thread/thread_ex.o
00:04:52.050 LINK histogram_perf
00:04:52.050 LINK nvme_fuzz
00:04:52.050 CC examples/vmd/lsvmd/lsvmd.o
00:04:52.050 CXX test/cpp_headers/bit_array.o
00:04:52.050 LINK pci_ut
00:04:52.323 LINK nvmf_tgt
00:04:52.323 CXX test/cpp_headers/bit_pool.o
00:04:52.323 CC examples/idxd/perf/perf.o
00:04:52.324 LINK lsvmd
00:04:52.324 LINK thread
00:04:52.324 LINK hello_sock
00:04:52.324 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:04:52.324 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:04:52.324 CXX test/cpp_headers/blob_bdev.o
00:04:52.582 CXX test/cpp_headers/blobfs_bdev.o
00:04:52.582 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:04:52.582 CC app/iscsi_tgt/iscsi_tgt.o
00:04:52.582 CC examples/vmd/led/led.o
00:04:52.582 LINK idxd_perf
00:04:52.582 CXX test/cpp_headers/blobfs.o
00:04:52.582 CC test/event/event_perf/event_perf.o
00:04:52.840 CC test/nvme/aer/aer.o
00:04:52.840 CC app/spdk_tgt/spdk_tgt.o
00:04:52.840 LINK led
00:04:52.840 LINK iscsi_tgt
00:04:52.840 LINK memory_ut
00:04:52.840 LINK event_perf
00:04:52.840 CXX test/cpp_headers/blob.o
00:04:52.840 CC test/nvme/reset/reset.o
00:04:52.840 LINK spdk_tgt
00:04:53.099 CXX test/cpp_headers/conf.o
00:04:53.099 LINK vhost_fuzz
00:04:53.099 CXX test/cpp_headers/config.o
00:04:53.099 LINK aer
00:04:53.099 CC test/event/reactor/reactor.o
00:04:53.099 CC examples/nvme/hello_world/hello_world.o
00:04:53.099 CC test/nvme/sgl/sgl.o
00:04:53.099 LINK reset
00:04:53.099 CXX test/cpp_headers/cpuset.o
00:04:53.099 CC examples/fsdev/hello_world/hello_fsdev.o
00:04:53.359 LINK reactor
00:04:53.359 CC app/spdk_lspci/spdk_lspci.o
00:04:53.359 CC test/app/jsoncat/jsoncat.o
00:04:53.359 CXX test/cpp_headers/crc16.o
00:04:53.359 LINK hello_world
00:04:53.359 CC examples/accel/perf/accel_perf.o
00:04:53.359 LINK spdk_lspci
00:04:53.359 CC test/event/reactor_perf/reactor_perf.o
00:04:53.618 LINK sgl
00:04:53.618 CC app/spdk_nvme_perf/perf.o
00:04:53.618 LINK jsoncat
00:04:53.618 LINK hello_fsdev
00:04:53.618 CXX test/cpp_headers/crc32.o
00:04:53.618 LINK reactor_perf
00:04:53.618 CC examples/nvme/reconnect/reconnect.o
00:04:53.618 CXX test/cpp_headers/crc64.o
00:04:53.877 CC test/event/app_repeat/app_repeat.o
00:04:53.877 CC app/spdk_nvme_identify/identify.o
00:04:53.877 CC test/nvme/e2edp/nvme_dp.o
00:04:53.877 CC app/spdk_nvme_discover/discovery_aer.o
00:04:53.877 CC app/spdk_top/spdk_top.o
00:04:53.877 CXX test/cpp_headers/dif.o
00:04:53.877 LINK app_repeat
00:04:53.877 LINK accel_perf
00:04:54.136 LINK spdk_nvme_discover
00:04:54.136 CXX test/cpp_headers/dma.o
00:04:54.136 LINK reconnect
00:04:54.136 LINK nvme_dp
00:04:54.136 CC test/event/scheduler/scheduler.o
00:04:54.136 CXX test/cpp_headers/endian.o
00:04:54.397 CC app/vhost/vhost.o
00:04:54.397 CC examples/nvme/nvme_manage/nvme_manage.o
00:04:54.397 CC test/nvme/overhead/overhead.o
00:04:54.397 CXX test/cpp_headers/env_dpdk.o
00:04:54.397 CC examples/blob/hello_world/hello_blob.o
00:04:54.397 LINK scheduler
00:04:54.397 LINK iscsi_fuzz
00:04:54.397 LINK vhost
00:04:54.663 LINK spdk_nvme_perf
00:04:54.663 CXX test/cpp_headers/env.o
00:04:54.663 LINK hello_blob
00:04:54.663 CXX test/cpp_headers/event.o
00:04:54.663 LINK overhead
00:04:54.921 CC examples/nvme/arbitration/arbitration.o
00:04:54.921 LINK spdk_nvme_identify
00:04:54.921 CC test/app/stub/stub.o
00:04:54.921 CC examples/blob/cli/blobcli.o
00:04:54.921 CXX test/cpp_headers/fd_group.o
00:04:54.921 CC examples/bdev/hello_world/hello_bdev.o
00:04:54.921 CXX test/cpp_headers/fd.o
00:04:54.921 LINK spdk_top
00:04:54.921 LINK nvme_manage
00:04:54.921 CC test/nvme/err_injection/err_injection.o
00:04:54.921 LINK stub
00:04:55.179 CC test/nvme/startup/startup.o
00:04:55.179 CXX test/cpp_headers/file.o
00:04:55.179 CC test/nvme/reserve/reserve.o
00:04:55.179 LINK hello_bdev
00:04:55.179 LINK arbitration
00:04:55.179 LINK err_injection
00:04:55.179 CC test/nvme/simple_copy/simple_copy.o
00:04:55.179 CC app/spdk_dd/spdk_dd.o
00:04:55.179 CXX test/cpp_headers/fsdev.o
00:04:55.179 CC test/nvme/connect_stress/connect_stress.o
00:04:55.179 LINK startup
00:04:55.436 LINK blobcli
00:04:55.436 LINK reserve
00:04:55.436 CXX test/cpp_headers/fsdev_module.o
00:04:55.436 CXX test/cpp_headers/ftl.o
00:04:55.436 CC test/nvme/boot_partition/boot_partition.o
00:04:55.436 LINK connect_stress
00:04:55.436 CC examples/nvme/hotplug/hotplug.o
00:04:55.436 LINK simple_copy
00:04:55.436 CC examples/bdev/bdevperf/bdevperf.o
00:04:55.695 CXX test/cpp_headers/gpt_spec.o
00:04:55.695 LINK boot_partition
00:04:55.695 CC test/nvme/compliance/nvme_compliance.o
00:04:55.695 LINK spdk_dd
00:04:55.695 CXX test/cpp_headers/hexlify.o
00:04:55.695 CXX test/cpp_headers/histogram_data.o
00:04:55.695 LINK hotplug
00:04:55.695 CXX test/cpp_headers/idxd.o
00:04:55.695 CC test/nvme/fused_ordering/fused_ordering.o
00:04:55.954 CXX test/cpp_headers/idxd_spec.o
00:04:55.954 CC app/fio/nvme/fio_plugin.o
00:04:55.954 CXX test/cpp_headers/init.o
00:04:55.954 CC examples/nvme/cmb_copy/cmb_copy.o
00:04:55.954 CC examples/nvme/abort/abort.o
00:04:55.954 LINK fused_ordering
00:04:55.954 CXX test/cpp_headers/ioat.o
00:04:55.954 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:04:56.213 LINK nvme_compliance
00:04:56.213 CC app/fio/bdev/fio_plugin.o
00:04:56.213 CXX test/cpp_headers/ioat_spec.o
00:04:56.213 LINK cmb_copy
00:04:56.213 CXX test/cpp_headers/iscsi_spec.o
00:04:56.213 LINK pmr_persistence
00:04:56.471 CC test/accel/dif/dif.o
00:04:56.471 CXX test/cpp_headers/json.o
00:04:56.471 CXX test/cpp_headers/jsonrpc.o
00:04:56.471 CC test/nvme/doorbell_aers/doorbell_aers.o
00:04:56.471 CC test/nvme/fdp/fdp.o
00:04:56.471 CC test/nvme/cuse/cuse.o
00:04:56.471 LINK abort
00:04:56.471 LINK spdk_nvme
00:04:56.471 LINK bdevperf
00:04:56.471 CXX test/cpp_headers/keyring.o
00:04:56.729 CXX test/cpp_headers/keyring_module.o
00:04:56.729 LINK doorbell_aers
00:04:56.729 CXX test/cpp_headers/likely.o
00:04:56.729 LINK spdk_bdev
00:04:56.729 CC test/blobfs/mkfs/mkfs.o
00:04:56.729 CXX test/cpp_headers/log.o
00:04:56.987 CXX test/cpp_headers/lvol.o
00:04:56.987 CXX test/cpp_headers/md5.o
00:04:56.988 CXX test/cpp_headers/memory.o
00:04:56.988 LINK fdp
00:04:56.988 CC test/lvol/esnap/esnap.o
00:04:56.988 CXX test/cpp_headers/mmio.o
00:04:56.988 CC examples/nvmf/nvmf/nvmf.o
00:04:56.988 LINK mkfs
00:04:56.988 CXX test/cpp_headers/nbd.o
00:04:56.988 CXX test/cpp_headers/net.o
00:04:56.988 CXX test/cpp_headers/notify.o
00:04:56.988 CXX test/cpp_headers/nvme.o
00:04:56.988 CXX test/cpp_headers/nvme_intel.o
00:04:57.246 CXX test/cpp_headers/nvme_ocssd.o
00:04:57.246 CXX test/cpp_headers/nvme_ocssd_spec.o
00:04:57.246 CXX test/cpp_headers/nvme_spec.o
00:04:57.246 CXX test/cpp_headers/nvme_zns.o
00:04:57.246 CXX test/cpp_headers/nvmf_cmd.o
00:04:57.246 CXX test/cpp_headers/nvmf_fc_spec.o
00:04:57.246 LINK dif
00:04:57.246 LINK nvmf
00:04:57.246 CXX test/cpp_headers/nvmf.o
00:04:57.504 CXX test/cpp_headers/nvmf_spec.o
00:04:57.504 CXX test/cpp_headers/nvmf_transport.o
00:04:57.504 CXX test/cpp_headers/opal.o
00:04:57.504 CXX test/cpp_headers/opal_spec.o
00:04:57.504 CXX test/cpp_headers/pci_ids.o
00:04:57.504 CXX test/cpp_headers/pipe.o
00:04:57.504 CXX test/cpp_headers/queue.o
00:04:57.504 CXX test/cpp_headers/reduce.o
00:04:57.504 CXX test/cpp_headers/rpc.o
00:04:57.762 CXX test/cpp_headers/scheduler.o
00:04:57.762 CXX test/cpp_headers/scsi.o
00:04:57.762 CXX test/cpp_headers/scsi_spec.o
00:04:57.762 CXX test/cpp_headers/sock.o
00:04:57.762 CXX test/cpp_headers/stdinc.o
00:04:57.762 CXX test/cpp_headers/string.o
00:04:57.762 CXX test/cpp_headers/thread.o
00:04:57.762 CXX test/cpp_headers/trace.o
00:04:57.762 CXX test/cpp_headers/trace_parser.o
00:04:57.762 CC test/bdev/bdevio/bdevio.o
00:04:57.762 CXX test/cpp_headers/tree.o
00:04:57.762 CXX test/cpp_headers/ublk.o
00:04:57.762 CXX test/cpp_headers/util.o
00:04:58.021 CXX test/cpp_headers/uuid.o
00:04:58.021 CXX test/cpp_headers/version.o
00:04:58.021 CXX test/cpp_headers/vfio_user_pci.o
00:04:58.021 CXX test/cpp_headers/vfio_user_spec.o
00:04:58.021 CXX test/cpp_headers/vhost.o
00:04:58.021 CXX test/cpp_headers/vmd.o
00:04:58.021 LINK cuse
00:04:58.021 CXX test/cpp_headers/xor.o
00:04:58.021 CXX test/cpp_headers/zipf.o
00:04:58.279 LINK bdevio
00:05:03.542 LINK esnap
00:05:03.542
00:05:03.542 real 1m33.165s
00:05:03.542 user 8m31.455s
00:05:03.542 sys 1m39.193s
00:05:03.542 21:32:03 make -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:05:03.542 21:32:03 make -- common/autotest_common.sh@10 -- $ set +x
00:05:03.542 ************************************
00:05:03.542 END TEST make
00:05:03.542 ************************************
00:05:03.543 21:32:03 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:05:03.543 21:32:03 -- pm/common@29 -- $ signal_monitor_resources TERM
00:05:03.543 21:32:03 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:05:03.543 21:32:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:03.543 21:32:03 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:05:03.543 21:32:03 -- pm/common@44 -- $ pid=5463
00:05:03.543 21:32:03 -- pm/common@50 -- $ kill -TERM 5463
00:05:03.543 21:32:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:03.543 21:32:03 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:05:03.543 21:32:03 -- pm/common@44 -- $ pid=5465
00:05:03.543 21:32:03 -- pm/common@50 -- $ kill -TERM 5465
00:05:03.543 21:32:03 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 ))
00:05:03.543 21:32:03 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:05:03.543 21:32:03 -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:03.543 21:32:03 -- common/autotest_common.sh@1711 -- # lcov --version
00:05:03.543 21:32:03 -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:03.543 21:32:03 -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:03.543 21:32:03 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:03.543 21:32:03 -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:03.543 21:32:03 -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:03.543 21:32:03 -- scripts/common.sh@336 -- # IFS=.-:
00:05:03.543 21:32:03 -- scripts/common.sh@336 -- # read -ra ver1
00:05:03.543 21:32:03 -- scripts/common.sh@337 -- # IFS=.-:
00:05:03.543 21:32:03 -- scripts/common.sh@337 -- # read -ra ver2
00:05:03.543 21:32:03 -- scripts/common.sh@338 -- # local 'op=<'
00:05:03.543 21:32:03 -- scripts/common.sh@340 -- # ver1_l=2
00:05:03.543 21:32:03 -- scripts/common.sh@341 -- # ver2_l=1
00:05:03.543 21:32:03 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:03.543 21:32:03 -- scripts/common.sh@344 -- # case "$op" in
00:05:03.543 21:32:03 -- scripts/common.sh@345 -- # : 1
00:05:03.543 21:32:03 -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:03.543 21:32:03 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:03.543 21:32:03 -- scripts/common.sh@365 -- # decimal 1
00:05:03.543 21:32:03 -- scripts/common.sh@353 -- # local d=1
00:05:03.543 21:32:03 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:03.543 21:32:03 -- scripts/common.sh@355 -- # echo 1
00:05:03.543 21:32:03 -- scripts/common.sh@365 -- # ver1[v]=1
00:05:03.543 21:32:03 -- scripts/common.sh@366 -- # decimal 2
00:05:03.543 21:32:03 -- scripts/common.sh@353 -- # local d=2
00:05:03.543 21:32:03 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:03.543 21:32:03 -- scripts/common.sh@355 -- # echo 2
00:05:03.543 21:32:03 -- scripts/common.sh@366 -- # ver2[v]=2
00:05:03.543 21:32:03 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:03.543 21:32:03 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:03.543 21:32:03 -- scripts/common.sh@368 -- # return 0
00:05:03.543 21:32:03 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:03.543 21:32:03 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:03.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:03.543 --rc genhtml_branch_coverage=1
00:05:03.543 --rc genhtml_function_coverage=1
00:05:03.543 --rc genhtml_legend=1
00:05:03.543 --rc geninfo_all_blocks=1
00:05:03.543 --rc geninfo_unexecuted_blocks=1
00:05:03.543
00:05:03.543 '
00:05:03.543 21:32:03 -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:03.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:03.543 --rc genhtml_branch_coverage=1
00:05:03.543 --rc genhtml_function_coverage=1
00:05:03.543 --rc genhtml_legend=1
00:05:03.543 --rc geninfo_all_blocks=1
00:05:03.543 --rc geninfo_unexecuted_blocks=1
00:05:03.543
00:05:03.543 '
00:05:03.543 21:32:03 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:05:03.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:03.543 --rc genhtml_branch_coverage=1
00:05:03.543 --rc genhtml_function_coverage=1
00:05:03.543 --rc genhtml_legend=1
00:05:03.543 --rc geninfo_all_blocks=1
00:05:03.543 --rc geninfo_unexecuted_blocks=1
00:05:03.543
00:05:03.543 '
00:05:03.543 21:32:03 -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:05:03.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:03.543 --rc genhtml_branch_coverage=1
00:05:03.543 --rc genhtml_function_coverage=1
00:05:03.543 --rc genhtml_legend=1
00:05:03.543 --rc geninfo_all_blocks=1
00:05:03.543 --rc geninfo_unexecuted_blocks=1
00:05:03.543
00:05:03.543 '
00:05:03.543 21:32:03 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:05:03.543 21:32:04 -- nvmf/common.sh@7 -- # uname -s
00:05:03.543 21:32:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:03.543 21:32:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:03.543 21:32:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:03.543 21:32:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:03.543 21:32:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:03.543 21:32:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:03.543 21:32:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:03.543 21:32:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:03.543 21:32:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:03.543 21:32:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:03.543 21:32:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e0d89c1-5aa5-4136-8af2-b7f6369ef5ad
00:05:03.543 21:32:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=2e0d89c1-5aa5-4136-8af2-b7f6369ef5ad
00:05:03.543 21:32:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:03.543 21:32:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:03.543 21:32:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:05:03.543 21:32:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:03.543 21:32:04 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:05:03.543 21:32:04 -- scripts/common.sh@15 -- # shopt -s extglob
00:05:03.543 21:32:04 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:03.543 21:32:04 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:03.543 21:32:04 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:03.543 21:32:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:03.543 21:32:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:03.543 21:32:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:03.543 21:32:04 -- paths/export.sh@5 -- # export PATH
00:05:03.543 21:32:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:03.543 21:32:04 -- nvmf/common.sh@51 -- # : 0
00:05:03.543 21:32:04 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:05:03.543 21:32:04 -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:05:03.543 21:32:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:03.543 21:32:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:03.543 21:32:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:03.543 21:32:04 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:05:03.543 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:05:03.543 21:32:04 -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:05:03.543 21:32:04 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:05:03.543 21:32:04 -- nvmf/common.sh@55 -- # have_pci_nics=0
00:05:03.543 21:32:04 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:05:03.543 21:32:04 -- spdk/autotest.sh@32 -- # uname -s
00:05:03.543 21:32:04 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:05:03.543 21:32:04 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:05:03.543 21:32:04 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps
00:05:03.543 21:32:04 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t'
00:05:03.543 21:32:04 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps
00:05:03.543 21:32:04 -- spdk/autotest.sh@44 -- # modprobe nbd
00:05:03.543 21:32:04 -- spdk/autotest.sh@46 -- # type -P udevadm
00:05:03.543 21:32:04 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:05:03.543 21:32:04 -- spdk/autotest.sh@48 -- # udevadm_pid=54473
00:05:03.543 21:32:04 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:05:03.543 21:32:04 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:05:03.543 21:32:04 -- pm/common@17 -- # local monitor
00:05:03.543 21:32:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:05:03.543 21:32:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:05:03.543 21:32:04 -- pm/common@25 -- # sleep 1
00:05:03.543 21:32:04 -- pm/common@21 -- # date +%s
00:05:03.543 21:32:04 -- pm/common@21 -- # date +%s
00:05:03.543 21:32:04 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733866324
00:05:03.543 21:32:04 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733866324
00:05:03.543 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733866324_collect-cpu-load.pm.log
00:05:03.543 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733866324_collect-vmstat.pm.log
00:05:04.482 21:32:05 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:05:04.482 21:32:05 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:05:04.482 21:32:05 -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:04.482 21:32:05 -- common/autotest_common.sh@10 -- # set +x
00:05:04.482 21:32:05 -- spdk/autotest.sh@59 -- # create_test_list
00:05:04.482 21:32:05 -- common/autotest_common.sh@752 -- # xtrace_disable
00:05:04.482 21:32:05 -- common/autotest_common.sh@10 -- # set +x
00:05:04.482 21:32:05 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh
00:05:04.482 21:32:05 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk
00:05:04.482 21:32:05 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk
00:05:04.482 21:32:05 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output
00:05:04.482 21:32:05 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk
00:05:04.482 21:32:05 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:05:04.482 21:32:05 -- common/autotest_common.sh@1457 -- # uname
00:05:04.482 21:32:05 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']'
00:05:04.482 21:32:05 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:05:04.482 21:32:05 -- common/autotest_common.sh@1477 --
# uname 00:05:04.482 21:32:05 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:04.482 21:32:05 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:04.482 21:32:05 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:04.744 lcov: LCOV version 1.15 00:05:04.744 21:32:05 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:19.645 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:19.645 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:34.551 21:32:34 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:34.551 21:32:34 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:34.551 21:32:34 -- common/autotest_common.sh@10 -- # set +x 00:05:34.551 21:32:34 -- spdk/autotest.sh@78 -- # rm -f 00:05:34.551 21:32:34 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:34.551 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:34.809 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:34.809 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:34.809 21:32:35 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:34.809 21:32:35 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:34.809 21:32:35 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:34.809 21:32:35 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:05:34.809 
21:32:35 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:05:34.809 21:32:35 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:05:34.809 21:32:35 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:34.809 21:32:35 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:05:34.809 21:32:35 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:34.809 21:32:35 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:05:34.809 21:32:35 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:34.809 21:32:35 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:34.809 21:32:35 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:34.810 21:32:35 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:34.810 21:32:35 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:05:34.810 21:32:35 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:34.810 21:32:35 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:05:34.810 21:32:35 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:34.810 21:32:35 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:34.810 21:32:35 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:34.810 21:32:35 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:34.810 21:32:35 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:05:34.810 21:32:35 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:05:34.810 21:32:35 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:34.810 21:32:35 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:34.810 21:32:35 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:34.810 21:32:35 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:05:34.810 21:32:35 -- 
common/autotest_common.sh@1650 -- # local device=nvme1n3 00:05:34.810 21:32:35 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:34.810 21:32:35 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:34.810 21:32:35 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:34.810 21:32:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:34.810 21:32:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:34.810 21:32:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:34.810 21:32:35 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:34.810 21:32:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:34.810 No valid GPT data, bailing 00:05:34.810 21:32:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:34.810 21:32:35 -- scripts/common.sh@394 -- # pt= 00:05:34.810 21:32:35 -- scripts/common.sh@395 -- # return 1 00:05:34.810 21:32:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:34.810 1+0 records in 00:05:34.810 1+0 records out 00:05:34.810 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00685229 s, 153 MB/s 00:05:34.810 21:32:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:34.810 21:32:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:34.810 21:32:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:34.810 21:32:35 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:34.810 21:32:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:35.069 No valid GPT data, bailing 00:05:35.069 21:32:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:35.069 21:32:35 -- scripts/common.sh@394 -- # pt= 00:05:35.069 21:32:35 -- scripts/common.sh@395 -- # return 1 00:05:35.070 21:32:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:35.070 1+0 records in 00:05:35.070 1+0 records 
out 00:05:35.070 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0068718 s, 153 MB/s 00:05:35.070 21:32:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:35.070 21:32:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:35.070 21:32:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:35.070 21:32:35 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:35.070 21:32:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:35.070 No valid GPT data, bailing 00:05:35.070 21:32:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:35.070 21:32:35 -- scripts/common.sh@394 -- # pt= 00:05:35.070 21:32:35 -- scripts/common.sh@395 -- # return 1 00:05:35.070 21:32:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:35.070 1+0 records in 00:05:35.070 1+0 records out 00:05:35.070 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00627686 s, 167 MB/s 00:05:35.070 21:32:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:35.070 21:32:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:35.070 21:32:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:35.070 21:32:35 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:35.070 21:32:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:35.070 No valid GPT data, bailing 00:05:35.070 21:32:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:35.070 21:32:35 -- scripts/common.sh@394 -- # pt= 00:05:35.070 21:32:35 -- scripts/common.sh@395 -- # return 1 00:05:35.070 21:32:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:35.070 1+0 records in 00:05:35.070 1+0 records out 00:05:35.070 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0052033 s, 202 MB/s 00:05:35.070 21:32:35 -- spdk/autotest.sh@105 -- # sync 00:05:35.329 21:32:35 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd 
reap_spdk_processes 00:05:35.329 21:32:35 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:35.329 21:32:35 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:38.627 21:32:38 -- spdk/autotest.sh@111 -- # uname -s 00:05:38.627 21:32:38 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:38.627 21:32:38 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:38.627 21:32:38 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:38.886 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:38.886 Hugepages 00:05:38.886 node hugesize free / total 00:05:38.886 node0 1048576kB 0 / 0 00:05:38.886 node0 2048kB 0 / 0 00:05:38.886 00:05:38.886 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:38.886 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:39.145 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:39.145 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:39.145 21:32:39 -- spdk/autotest.sh@117 -- # uname -s 00:05:39.145 21:32:39 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:39.145 21:32:39 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:39.145 21:32:39 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:40.091 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:40.091 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:40.091 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:40.351 21:32:40 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:41.290 21:32:41 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:41.290 21:32:41 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:41.290 21:32:41 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:41.290 21:32:41 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 
00:05:41.290 21:32:41 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:41.290 21:32:41 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:41.290 21:32:41 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:41.290 21:32:41 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:41.290 21:32:41 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:41.290 21:32:42 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:41.290 21:32:42 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:41.290 21:32:42 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:41.858 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:41.858 Waiting for block devices as requested 00:05:41.858 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:42.117 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:42.117 21:32:42 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:42.117 21:32:42 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:42.117 21:32:42 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:42.117 21:32:42 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:42.117 21:32:42 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:42.117 21:32:42 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:42.117 21:32:42 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:42.117 21:32:42 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:42.117 21:32:42 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:42.117 
21:32:42 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:42.117 21:32:42 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:42.117 21:32:42 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:42.117 21:32:42 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:42.117 21:32:42 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:42.117 21:32:42 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:42.117 21:32:42 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:42.117 21:32:42 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:42.117 21:32:42 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:42.117 21:32:42 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:42.117 21:32:42 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:42.117 21:32:42 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:42.117 21:32:42 -- common/autotest_common.sh@1543 -- # continue 00:05:42.117 21:32:42 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:42.117 21:32:42 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:42.117 21:32:42 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:42.117 21:32:42 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:42.117 21:32:42 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:42.117 21:32:42 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:42.117 21:32:42 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:42.117 21:32:42 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:42.117 21:32:42 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:42.117 21:32:42 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:42.117 21:32:42 -- 
common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:42.117 21:32:42 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:42.117 21:32:42 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:42.117 21:32:42 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:42.117 21:32:42 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:42.117 21:32:42 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:42.117 21:32:42 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:42.117 21:32:42 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:42.117 21:32:42 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:42.117 21:32:42 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:42.117 21:32:42 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:42.117 21:32:42 -- common/autotest_common.sh@1543 -- # continue 00:05:42.117 21:32:42 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:42.117 21:32:42 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:42.117 21:32:42 -- common/autotest_common.sh@10 -- # set +x 00:05:42.117 21:32:42 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:42.117 21:32:42 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:42.117 21:32:42 -- common/autotest_common.sh@10 -- # set +x 00:05:42.117 21:32:42 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:43.055 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:43.055 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:43.313 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:43.313 21:32:43 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:43.313 21:32:43 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:43.313 21:32:43 -- common/autotest_common.sh@10 -- # set +x 00:05:43.313 21:32:43 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:43.313 21:32:43 -- 
common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:43.313 21:32:43 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:43.313 21:32:43 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:43.313 21:32:43 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:43.313 21:32:43 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:43.313 21:32:43 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:43.313 21:32:43 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:43.313 21:32:43 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:43.313 21:32:43 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:43.313 21:32:43 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:43.313 21:32:43 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:43.313 21:32:43 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:43.313 21:32:44 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:43.313 21:32:44 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:43.313 21:32:44 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:43.313 21:32:44 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:43.313 21:32:44 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:43.313 21:32:44 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:43.313 21:32:44 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:43.313 21:32:44 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:43.313 21:32:44 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:43.314 21:32:44 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:43.314 21:32:44 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:43.314 21:32:44 -- 
common/autotest_common.sh@1572 -- # return 0 00:05:43.314 21:32:44 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:43.314 21:32:44 -- common/autotest_common.sh@1580 -- # return 0 00:05:43.314 21:32:44 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:43.314 21:32:44 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:43.314 21:32:44 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:43.314 21:32:44 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:43.314 21:32:44 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:43.314 21:32:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:43.314 21:32:44 -- common/autotest_common.sh@10 -- # set +x 00:05:43.314 21:32:44 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:43.314 21:32:44 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:43.314 21:32:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.314 21:32:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.314 21:32:44 -- common/autotest_common.sh@10 -- # set +x 00:05:43.573 ************************************ 00:05:43.573 START TEST env 00:05:43.573 ************************************ 00:05:43.573 21:32:44 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:43.573 * Looking for test storage... 
00:05:43.573 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:43.573 21:32:44 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:43.573 21:32:44 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:43.573 21:32:44 env -- common/autotest_common.sh@1711 -- # lcov --version 00:05:43.573 21:32:44 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:43.573 21:32:44 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.573 21:32:44 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.573 21:32:44 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.573 21:32:44 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.573 21:32:44 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.573 21:32:44 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.573 21:32:44 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.573 21:32:44 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.573 21:32:44 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.573 21:32:44 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.573 21:32:44 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.573 21:32:44 env -- scripts/common.sh@344 -- # case "$op" in 00:05:43.573 21:32:44 env -- scripts/common.sh@345 -- # : 1 00:05:43.573 21:32:44 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.573 21:32:44 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:43.573 21:32:44 env -- scripts/common.sh@365 -- # decimal 1 00:05:43.573 21:32:44 env -- scripts/common.sh@353 -- # local d=1 00:05:43.573 21:32:44 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.573 21:32:44 env -- scripts/common.sh@355 -- # echo 1 00:05:43.573 21:32:44 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.573 21:32:44 env -- scripts/common.sh@366 -- # decimal 2 00:05:43.573 21:32:44 env -- scripts/common.sh@353 -- # local d=2 00:05:43.573 21:32:44 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.573 21:32:44 env -- scripts/common.sh@355 -- # echo 2 00:05:43.573 21:32:44 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.573 21:32:44 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.573 21:32:44 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.573 21:32:44 env -- scripts/common.sh@368 -- # return 0 00:05:43.573 21:32:44 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.573 21:32:44 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:43.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.573 --rc genhtml_branch_coverage=1 00:05:43.573 --rc genhtml_function_coverage=1 00:05:43.573 --rc genhtml_legend=1 00:05:43.573 --rc geninfo_all_blocks=1 00:05:43.573 --rc geninfo_unexecuted_blocks=1 00:05:43.573 00:05:43.573 ' 00:05:43.573 21:32:44 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:43.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.573 --rc genhtml_branch_coverage=1 00:05:43.573 --rc genhtml_function_coverage=1 00:05:43.573 --rc genhtml_legend=1 00:05:43.573 --rc geninfo_all_blocks=1 00:05:43.573 --rc geninfo_unexecuted_blocks=1 00:05:43.573 00:05:43.573 ' 00:05:43.573 21:32:44 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:43.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:43.573 --rc genhtml_branch_coverage=1 00:05:43.573 --rc genhtml_function_coverage=1 00:05:43.573 --rc genhtml_legend=1 00:05:43.573 --rc geninfo_all_blocks=1 00:05:43.573 --rc geninfo_unexecuted_blocks=1 00:05:43.573 00:05:43.573 ' 00:05:43.573 21:32:44 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:43.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.573 --rc genhtml_branch_coverage=1 00:05:43.573 --rc genhtml_function_coverage=1 00:05:43.573 --rc genhtml_legend=1 00:05:43.574 --rc geninfo_all_blocks=1 00:05:43.574 --rc geninfo_unexecuted_blocks=1 00:05:43.574 00:05:43.574 ' 00:05:43.574 21:32:44 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:43.574 21:32:44 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.574 21:32:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.574 21:32:44 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.836 ************************************ 00:05:43.836 START TEST env_memory 00:05:43.836 ************************************ 00:05:43.837 21:32:44 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:43.837 00:05:43.837 00:05:43.837 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.837 http://cunit.sourceforge.net/ 00:05:43.837 00:05:43.837 00:05:43.837 Suite: memory 00:05:43.837 Test: alloc and free memory map ...[2024-12-10 21:32:44.439157] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:43.837 passed 00:05:43.837 Test: mem map translation ...[2024-12-10 21:32:44.486484] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:43.837 [2024-12-10 21:32:44.486607] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:43.837 [2024-12-10 21:32:44.486730] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:43.837 [2024-12-10 21:32:44.486793] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:43.837 passed 00:05:43.837 Test: mem map registration ...[2024-12-10 21:32:44.558693] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:43.837 [2024-12-10 21:32:44.558811] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:43.837 passed 00:05:44.097 Test: mem map adjacent registrations ...passed 00:05:44.097 00:05:44.097 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.097 suites 1 1 n/a 0 0 00:05:44.097 tests 4 4 4 0 0 00:05:44.097 asserts 152 152 152 0 n/a 00:05:44.097 00:05:44.097 Elapsed time = 0.263 seconds 00:05:44.097 00:05:44.097 real 0m0.324s 00:05:44.097 user 0m0.279s 00:05:44.097 sys 0m0.032s 00:05:44.097 21:32:44 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.097 21:32:44 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:44.097 ************************************ 00:05:44.097 END TEST env_memory 00:05:44.097 ************************************ 00:05:44.097 21:32:44 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:44.097 21:32:44 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.097 21:32:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.097 21:32:44 env -- common/autotest_common.sh@10 -- # set +x 00:05:44.097 
************************************ 00:05:44.097 START TEST env_vtophys 00:05:44.097 ************************************ 00:05:44.097 21:32:44 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:44.097 EAL: lib.eal log level changed from notice to debug 00:05:44.097 EAL: Detected lcore 0 as core 0 on socket 0 00:05:44.097 EAL: Detected lcore 1 as core 0 on socket 0 00:05:44.097 EAL: Detected lcore 2 as core 0 on socket 0 00:05:44.097 EAL: Detected lcore 3 as core 0 on socket 0 00:05:44.097 EAL: Detected lcore 4 as core 0 on socket 0 00:05:44.097 EAL: Detected lcore 5 as core 0 on socket 0 00:05:44.097 EAL: Detected lcore 6 as core 0 on socket 0 00:05:44.097 EAL: Detected lcore 7 as core 0 on socket 0 00:05:44.097 EAL: Detected lcore 8 as core 0 on socket 0 00:05:44.097 EAL: Detected lcore 9 as core 0 on socket 0 00:05:44.097 EAL: Maximum logical cores by configuration: 128 00:05:44.097 EAL: Detected CPU lcores: 10 00:05:44.097 EAL: Detected NUMA nodes: 1 00:05:44.097 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:44.097 EAL: Detected shared linkage of DPDK 00:05:44.097 EAL: No shared files mode enabled, IPC will be disabled 00:05:44.097 EAL: Selected IOVA mode 'PA' 00:05:44.097 EAL: Probing VFIO support... 00:05:44.097 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:44.097 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:44.097 EAL: Ask a virtual area of 0x2e000 bytes 00:05:44.097 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:44.097 EAL: Setting up physically contiguous memory... 
00:05:44.097 EAL: Setting maximum number of open files to 524288 00:05:44.097 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:44.097 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:44.097 EAL: Ask a virtual area of 0x61000 bytes 00:05:44.097 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:44.097 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:44.097 EAL: Ask a virtual area of 0x400000000 bytes 00:05:44.097 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:44.097 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:44.097 EAL: Ask a virtual area of 0x61000 bytes 00:05:44.097 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:44.097 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:44.097 EAL: Ask a virtual area of 0x400000000 bytes 00:05:44.097 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:44.097 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:44.097 EAL: Ask a virtual area of 0x61000 bytes 00:05:44.097 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:44.097 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:44.097 EAL: Ask a virtual area of 0x400000000 bytes 00:05:44.097 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:44.097 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:44.097 EAL: Ask a virtual area of 0x61000 bytes 00:05:44.097 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:44.098 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:44.098 EAL: Ask a virtual area of 0x400000000 bytes 00:05:44.098 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:44.098 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:44.098 EAL: Hugepages will be freed exactly as allocated. 
00:05:44.098 EAL: No shared files mode enabled, IPC is disabled 00:05:44.098 EAL: No shared files mode enabled, IPC is disabled 00:05:44.356 EAL: TSC frequency is ~2290000 KHz 00:05:44.356 EAL: Main lcore 0 is ready (tid=7f76f7edaa40;cpuset=[0]) 00:05:44.356 EAL: Trying to obtain current memory policy. 00:05:44.356 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.356 EAL: Restoring previous memory policy: 0 00:05:44.356 EAL: request: mp_malloc_sync 00:05:44.356 EAL: No shared files mode enabled, IPC is disabled 00:05:44.356 EAL: Heap on socket 0 was expanded by 2MB 00:05:44.356 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:44.356 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:44.356 EAL: Mem event callback 'spdk:(nil)' registered 00:05:44.356 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:44.356 00:05:44.356 00:05:44.356 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.356 http://cunit.sourceforge.net/ 00:05:44.356 00:05:44.356 00:05:44.356 Suite: components_suite 00:05:44.617 Test: vtophys_malloc_test ...passed 00:05:44.617 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:44.617 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.617 EAL: Restoring previous memory policy: 4 00:05:44.617 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.617 EAL: request: mp_malloc_sync 00:05:44.617 EAL: No shared files mode enabled, IPC is disabled 00:05:44.617 EAL: Heap on socket 0 was expanded by 4MB 00:05:44.617 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.617 EAL: request: mp_malloc_sync 00:05:44.617 EAL: No shared files mode enabled, IPC is disabled 00:05:44.617 EAL: Heap on socket 0 was shrunk by 4MB 00:05:44.617 EAL: Trying to obtain current memory policy. 
00:05:44.617 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.617 EAL: Restoring previous memory policy: 4 00:05:44.617 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.617 EAL: request: mp_malloc_sync 00:05:44.617 EAL: No shared files mode enabled, IPC is disabled 00:05:44.617 EAL: Heap on socket 0 was expanded by 6MB 00:05:44.617 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.617 EAL: request: mp_malloc_sync 00:05:44.617 EAL: No shared files mode enabled, IPC is disabled 00:05:44.617 EAL: Heap on socket 0 was shrunk by 6MB 00:05:44.617 EAL: Trying to obtain current memory policy. 00:05:44.617 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.617 EAL: Restoring previous memory policy: 4 00:05:44.617 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.617 EAL: request: mp_malloc_sync 00:05:44.617 EAL: No shared files mode enabled, IPC is disabled 00:05:44.617 EAL: Heap on socket 0 was expanded by 10MB 00:05:44.617 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.617 EAL: request: mp_malloc_sync 00:05:44.617 EAL: No shared files mode enabled, IPC is disabled 00:05:44.617 EAL: Heap on socket 0 was shrunk by 10MB 00:05:44.617 EAL: Trying to obtain current memory policy. 00:05:44.617 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.617 EAL: Restoring previous memory policy: 4 00:05:44.617 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.617 EAL: request: mp_malloc_sync 00:05:44.617 EAL: No shared files mode enabled, IPC is disabled 00:05:44.617 EAL: Heap on socket 0 was expanded by 18MB 00:05:44.877 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.877 EAL: request: mp_malloc_sync 00:05:44.877 EAL: No shared files mode enabled, IPC is disabled 00:05:44.877 EAL: Heap on socket 0 was shrunk by 18MB 00:05:44.877 EAL: Trying to obtain current memory policy. 
00:05:44.877 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.877 EAL: Restoring previous memory policy: 4 00:05:44.877 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.877 EAL: request: mp_malloc_sync 00:05:44.877 EAL: No shared files mode enabled, IPC is disabled 00:05:44.877 EAL: Heap on socket 0 was expanded by 34MB 00:05:44.877 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.877 EAL: request: mp_malloc_sync 00:05:44.877 EAL: No shared files mode enabled, IPC is disabled 00:05:44.877 EAL: Heap on socket 0 was shrunk by 34MB 00:05:44.877 EAL: Trying to obtain current memory policy. 00:05:44.877 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.877 EAL: Restoring previous memory policy: 4 00:05:44.877 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.877 EAL: request: mp_malloc_sync 00:05:44.877 EAL: No shared files mode enabled, IPC is disabled 00:05:44.877 EAL: Heap on socket 0 was expanded by 66MB 00:05:45.135 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.135 EAL: request: mp_malloc_sync 00:05:45.136 EAL: No shared files mode enabled, IPC is disabled 00:05:45.136 EAL: Heap on socket 0 was shrunk by 66MB 00:05:45.136 EAL: Trying to obtain current memory policy. 00:05:45.136 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.136 EAL: Restoring previous memory policy: 4 00:05:45.136 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.136 EAL: request: mp_malloc_sync 00:05:45.136 EAL: No shared files mode enabled, IPC is disabled 00:05:45.136 EAL: Heap on socket 0 was expanded by 130MB 00:05:45.395 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.395 EAL: request: mp_malloc_sync 00:05:45.395 EAL: No shared files mode enabled, IPC is disabled 00:05:45.395 EAL: Heap on socket 0 was shrunk by 130MB 00:05:45.655 EAL: Trying to obtain current memory policy. 
00:05:45.655 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.655 EAL: Restoring previous memory policy: 4 00:05:45.655 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.655 EAL: request: mp_malloc_sync 00:05:45.655 EAL: No shared files mode enabled, IPC is disabled 00:05:45.655 EAL: Heap on socket 0 was expanded by 258MB 00:05:46.223 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.224 EAL: request: mp_malloc_sync 00:05:46.224 EAL: No shared files mode enabled, IPC is disabled 00:05:46.224 EAL: Heap on socket 0 was shrunk by 258MB 00:05:46.793 EAL: Trying to obtain current memory policy. 00:05:46.793 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.793 EAL: Restoring previous memory policy: 4 00:05:46.793 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.793 EAL: request: mp_malloc_sync 00:05:46.793 EAL: No shared files mode enabled, IPC is disabled 00:05:46.793 EAL: Heap on socket 0 was expanded by 514MB 00:05:47.731 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.991 EAL: request: mp_malloc_sync 00:05:47.991 EAL: No shared files mode enabled, IPC is disabled 00:05:47.991 EAL: Heap on socket 0 was shrunk by 514MB 00:05:48.929 EAL: Trying to obtain current memory policy. 
00:05:48.929 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.929 EAL: Restoring previous memory policy: 4 00:05:48.929 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.929 EAL: request: mp_malloc_sync 00:05:48.929 EAL: No shared files mode enabled, IPC is disabled 00:05:48.929 EAL: Heap on socket 0 was expanded by 1026MB 00:05:51.467 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.467 EAL: request: mp_malloc_sync 00:05:51.467 EAL: No shared files mode enabled, IPC is disabled 00:05:51.467 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:52.844 passed 00:05:52.844 00:05:52.844 Run Summary: Type Total Ran Passed Failed Inactive 00:05:52.844 suites 1 1 n/a 0 0 00:05:52.844 tests 2 2 2 0 0 00:05:52.844 asserts 5789 5789 5789 0 n/a 00:05:52.844 00:05:52.844 Elapsed time = 8.600 seconds 00:05:52.844 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.111 EAL: request: mp_malloc_sync 00:05:53.111 EAL: No shared files mode enabled, IPC is disabled 00:05:53.111 EAL: Heap on socket 0 was shrunk by 2MB 00:05:53.111 EAL: No shared files mode enabled, IPC is disabled 00:05:53.111 EAL: No shared files mode enabled, IPC is disabled 00:05:53.111 EAL: No shared files mode enabled, IPC is disabled 00:05:53.111 00:05:53.111 real 0m8.920s 00:05:53.111 user 0m7.928s 00:05:53.111 sys 0m0.829s 00:05:53.111 ************************************ 00:05:53.111 END TEST env_vtophys 00:05:53.111 ************************************ 00:05:53.111 21:32:53 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.111 21:32:53 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:53.111 21:32:53 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:53.111 21:32:53 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.112 21:32:53 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.112 21:32:53 env -- common/autotest_common.sh@10 -- # set +x 00:05:53.112 
************************************ 00:05:53.112 START TEST env_pci 00:05:53.112 ************************************ 00:05:53.112 21:32:53 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:53.112 00:05:53.112 00:05:53.112 CUnit - A unit testing framework for C - Version 2.1-3 00:05:53.112 http://cunit.sourceforge.net/ 00:05:53.112 00:05:53.112 00:05:53.112 Suite: pci 00:05:53.112 Test: pci_hook ...[2024-12-10 21:32:53.779686] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56802 has claimed it 00:05:53.112 passed 00:05:53.112 00:05:53.112 Run Summary: Type Total Ran Passed Failed Inactive 00:05:53.112 suites 1 1 n/a 0 0 00:05:53.112 tests 1 1 1 0 0 00:05:53.112 asserts 25 25 25 0 n/a 00:05:53.112 00:05:53.112 Elapsed time = 0.006 seconds 00:05:53.112 EAL: Cannot find device (10000:00:01.0) 00:05:53.112 EAL: Failed to attach device on primary process 00:05:53.112 00:05:53.112 real 0m0.103s 00:05:53.112 user 0m0.044s 00:05:53.112 sys 0m0.058s 00:05:53.112 21:32:53 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.112 ************************************ 00:05:53.112 END TEST env_pci 00:05:53.112 ************************************ 00:05:53.112 21:32:53 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:53.389 21:32:53 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:53.389 21:32:53 env -- env/env.sh@15 -- # uname 00:05:53.389 21:32:53 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:53.389 21:32:53 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:53.389 21:32:53 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:53.389 21:32:53 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:53.389 21:32:53 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.389 21:32:53 env -- common/autotest_common.sh@10 -- # set +x 00:05:53.389 ************************************ 00:05:53.389 START TEST env_dpdk_post_init 00:05:53.389 ************************************ 00:05:53.389 21:32:53 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:53.389 EAL: Detected CPU lcores: 10 00:05:53.389 EAL: Detected NUMA nodes: 1 00:05:53.389 EAL: Detected shared linkage of DPDK 00:05:53.389 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:53.389 EAL: Selected IOVA mode 'PA' 00:05:53.389 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:53.389 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:53.389 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:53.663 Starting DPDK initialization... 00:05:53.663 Starting SPDK post initialization... 00:05:53.663 SPDK NVMe probe 00:05:53.663 Attaching to 0000:00:10.0 00:05:53.663 Attaching to 0000:00:11.0 00:05:53.663 Attached to 0000:00:10.0 00:05:53.663 Attached to 0000:00:11.0 00:05:53.663 Cleaning up... 
00:05:53.663 00:05:53.663 real 0m0.286s 00:05:53.663 user 0m0.090s 00:05:53.663 sys 0m0.096s 00:05:53.663 21:32:54 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.663 21:32:54 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:53.663 ************************************ 00:05:53.663 END TEST env_dpdk_post_init 00:05:53.663 ************************************ 00:05:53.663 21:32:54 env -- env/env.sh@26 -- # uname 00:05:53.663 21:32:54 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:53.663 21:32:54 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:53.663 21:32:54 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.663 21:32:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.663 21:32:54 env -- common/autotest_common.sh@10 -- # set +x 00:05:53.663 ************************************ 00:05:53.663 START TEST env_mem_callbacks 00:05:53.663 ************************************ 00:05:53.663 21:32:54 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:53.663 EAL: Detected CPU lcores: 10 00:05:53.663 EAL: Detected NUMA nodes: 1 00:05:53.663 EAL: Detected shared linkage of DPDK 00:05:53.663 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:53.663 EAL: Selected IOVA mode 'PA' 00:05:53.923 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:53.923 00:05:53.923 00:05:53.923 CUnit - A unit testing framework for C - Version 2.1-3 00:05:53.923 http://cunit.sourceforge.net/ 00:05:53.923 00:05:53.923 00:05:53.923 Suite: memory 00:05:53.923 Test: test ... 
00:05:53.923 register 0x200000200000 2097152 00:05:53.923 malloc 3145728 00:05:53.923 register 0x200000400000 4194304 00:05:53.923 buf 0x2000004fffc0 len 3145728 PASSED 00:05:53.923 malloc 64 00:05:53.923 buf 0x2000004ffec0 len 64 PASSED 00:05:53.923 malloc 4194304 00:05:53.923 register 0x200000800000 6291456 00:05:53.923 buf 0x2000009fffc0 len 4194304 PASSED 00:05:53.923 free 0x2000004fffc0 3145728 00:05:53.923 free 0x2000004ffec0 64 00:05:53.923 unregister 0x200000400000 4194304 PASSED 00:05:53.923 free 0x2000009fffc0 4194304 00:05:53.923 unregister 0x200000800000 6291456 PASSED 00:05:53.923 malloc 8388608 00:05:53.923 register 0x200000400000 10485760 00:05:53.923 buf 0x2000005fffc0 len 8388608 PASSED 00:05:53.923 free 0x2000005fffc0 8388608 00:05:53.923 unregister 0x200000400000 10485760 PASSED 00:05:53.923 passed 00:05:53.923 00:05:53.923 Run Summary: Type Total Ran Passed Failed Inactive 00:05:53.923 suites 1 1 n/a 0 0 00:05:53.923 tests 1 1 1 0 0 00:05:53.923 asserts 15 15 15 0 n/a 00:05:53.923 00:05:53.923 Elapsed time = 0.093 seconds 00:05:53.923 00:05:53.923 real 0m0.303s 00:05:53.923 user 0m0.121s 00:05:53.923 sys 0m0.078s 00:05:53.923 21:32:54 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.923 21:32:54 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:53.923 ************************************ 00:05:53.923 END TEST env_mem_callbacks 00:05:53.923 ************************************ 00:05:53.923 00:05:53.923 real 0m10.530s 00:05:53.923 user 0m8.706s 00:05:53.923 sys 0m1.457s 00:05:53.923 21:32:54 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.923 21:32:54 env -- common/autotest_common.sh@10 -- # set +x 00:05:53.923 ************************************ 00:05:53.923 END TEST env 00:05:53.923 ************************************ 00:05:53.923 21:32:54 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:53.923 21:32:54 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.923 21:32:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.923 21:32:54 -- common/autotest_common.sh@10 -- # set +x 00:05:53.923 ************************************ 00:05:53.923 START TEST rpc 00:05:53.923 ************************************ 00:05:54.183 21:32:54 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:54.183 * Looking for test storage... 00:05:54.183 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:54.183 21:32:54 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:54.183 21:32:54 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:54.183 21:32:54 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:54.183 21:32:54 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:54.183 21:32:54 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.183 21:32:54 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.183 21:32:54 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.183 21:32:54 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.183 21:32:54 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.183 21:32:54 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.183 21:32:54 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.183 21:32:54 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.183 21:32:54 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.183 21:32:54 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.183 21:32:54 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.183 21:32:54 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:54.183 21:32:54 rpc -- scripts/common.sh@345 -- # : 1 00:05:54.183 21:32:54 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.183 21:32:54 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:54.183 21:32:54 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:54.183 21:32:54 rpc -- scripts/common.sh@353 -- # local d=1 00:05:54.183 21:32:54 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.183 21:32:54 rpc -- scripts/common.sh@355 -- # echo 1 00:05:54.183 21:32:54 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.183 21:32:54 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:54.183 21:32:54 rpc -- scripts/common.sh@353 -- # local d=2 00:05:54.183 21:32:54 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.183 21:32:54 rpc -- scripts/common.sh@355 -- # echo 2 00:05:54.183 21:32:54 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.183 21:32:54 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.183 21:32:54 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.183 21:32:54 rpc -- scripts/common.sh@368 -- # return 0 00:05:54.183 21:32:54 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.183 21:32:54 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:54.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.183 --rc genhtml_branch_coverage=1 00:05:54.183 --rc genhtml_function_coverage=1 00:05:54.183 --rc genhtml_legend=1 00:05:54.183 --rc geninfo_all_blocks=1 00:05:54.183 --rc geninfo_unexecuted_blocks=1 00:05:54.183 00:05:54.183 ' 00:05:54.183 21:32:54 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:54.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.183 --rc genhtml_branch_coverage=1 00:05:54.183 --rc genhtml_function_coverage=1 00:05:54.183 --rc genhtml_legend=1 00:05:54.183 --rc geninfo_all_blocks=1 00:05:54.183 --rc geninfo_unexecuted_blocks=1 00:05:54.183 00:05:54.183 ' 00:05:54.183 21:32:54 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:54.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:54.183 --rc genhtml_branch_coverage=1 00:05:54.183 --rc genhtml_function_coverage=1 00:05:54.183 --rc genhtml_legend=1 00:05:54.183 --rc geninfo_all_blocks=1 00:05:54.183 --rc geninfo_unexecuted_blocks=1 00:05:54.183 00:05:54.183 ' 00:05:54.183 21:32:54 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:54.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.183 --rc genhtml_branch_coverage=1 00:05:54.183 --rc genhtml_function_coverage=1 00:05:54.183 --rc genhtml_legend=1 00:05:54.183 --rc geninfo_all_blocks=1 00:05:54.183 --rc geninfo_unexecuted_blocks=1 00:05:54.183 00:05:54.183 ' 00:05:54.183 21:32:54 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56929 00:05:54.183 21:32:54 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:54.183 21:32:54 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.183 21:32:54 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56929 00:05:54.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.183 21:32:54 rpc -- common/autotest_common.sh@835 -- # '[' -z 56929 ']' 00:05:54.183 21:32:54 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.183 21:32:54 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.183 21:32:54 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.183 21:32:54 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.183 21:32:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.443 [2024-12-10 21:32:55.059479] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:05:54.443 [2024-12-10 21:32:55.059649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56929 ] 00:05:54.702 [2024-12-10 21:32:55.225263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.702 [2024-12-10 21:32:55.373632] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:54.702 [2024-12-10 21:32:55.373797] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56929' to capture a snapshot of events at runtime. 00:05:54.702 [2024-12-10 21:32:55.373814] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:54.702 [2024-12-10 21:32:55.373826] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:54.702 [2024-12-10 21:32:55.373835] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56929 for offline analysis/debug. 
00:05:54.702 [2024-12-10 21:32:55.375437] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.640 21:32:56 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.640 21:32:56 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:55.640 21:32:56 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:55.640 21:32:56 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:55.640 21:32:56 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:55.640 21:32:56 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:55.640 21:32:56 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.640 21:32:56 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.640 21:32:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.640 ************************************ 00:05:55.640 START TEST rpc_integrity 00:05:55.640 ************************************ 00:05:55.640 21:32:56 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:55.640 21:32:56 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:55.640 21:32:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.640 21:32:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.898 21:32:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.898 21:32:56 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:55.898 21:32:56 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:55.898 21:32:56 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:55.898 21:32:56 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:55.898 21:32:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.899 21:32:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.899 21:32:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.899 21:32:56 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:55.899 21:32:56 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:55.899 21:32:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.899 21:32:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.899 21:32:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.899 21:32:56 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:55.899 { 00:05:55.899 "name": "Malloc0", 00:05:55.899 "aliases": [ 00:05:55.899 "5ab7e07e-9281-4426-9644-9138b8a91caf" 00:05:55.899 ], 00:05:55.899 "product_name": "Malloc disk", 00:05:55.899 "block_size": 512, 00:05:55.899 "num_blocks": 16384, 00:05:55.899 "uuid": "5ab7e07e-9281-4426-9644-9138b8a91caf", 00:05:55.899 "assigned_rate_limits": { 00:05:55.899 "rw_ios_per_sec": 0, 00:05:55.899 "rw_mbytes_per_sec": 0, 00:05:55.899 "r_mbytes_per_sec": 0, 00:05:55.899 "w_mbytes_per_sec": 0 00:05:55.899 }, 00:05:55.899 "claimed": false, 00:05:55.899 "zoned": false, 00:05:55.899 "supported_io_types": { 00:05:55.899 "read": true, 00:05:55.899 "write": true, 00:05:55.899 "unmap": true, 00:05:55.899 "flush": true, 00:05:55.899 "reset": true, 00:05:55.899 "nvme_admin": false, 00:05:55.899 "nvme_io": false, 00:05:55.899 "nvme_io_md": false, 00:05:55.899 "write_zeroes": true, 00:05:55.899 "zcopy": true, 00:05:55.899 "get_zone_info": false, 00:05:55.899 "zone_management": false, 00:05:55.899 "zone_append": false, 00:05:55.899 "compare": false, 00:05:55.899 "compare_and_write": false, 00:05:55.899 "abort": true, 00:05:55.899 "seek_hole": false, 
00:05:55.899 "seek_data": false, 00:05:55.899 "copy": true, 00:05:55.899 "nvme_iov_md": false 00:05:55.899 }, 00:05:55.899 "memory_domains": [ 00:05:55.899 { 00:05:55.899 "dma_device_id": "system", 00:05:55.899 "dma_device_type": 1 00:05:55.899 }, 00:05:55.899 { 00:05:55.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.899 "dma_device_type": 2 00:05:55.899 } 00:05:55.899 ], 00:05:55.899 "driver_specific": {} 00:05:55.899 } 00:05:55.899 ]' 00:05:55.899 21:32:56 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:55.899 21:32:56 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:55.899 21:32:56 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:55.899 21:32:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.899 21:32:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.899 [2024-12-10 21:32:56.570408] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:55.899 [2024-12-10 21:32:56.570504] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:55.899 [2024-12-10 21:32:56.570542] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:55.899 [2024-12-10 21:32:56.570560] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:55.899 [2024-12-10 21:32:56.573262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:55.899 [2024-12-10 21:32:56.573316] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:55.899 Passthru0 00:05:55.899 21:32:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.899 21:32:56 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:55.899 21:32:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.899 21:32:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:55.899 21:32:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.899 21:32:56 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:55.899 { 00:05:55.899 "name": "Malloc0", 00:05:55.899 "aliases": [ 00:05:55.899 "5ab7e07e-9281-4426-9644-9138b8a91caf" 00:05:55.899 ], 00:05:55.899 "product_name": "Malloc disk", 00:05:55.899 "block_size": 512, 00:05:55.899 "num_blocks": 16384, 00:05:55.899 "uuid": "5ab7e07e-9281-4426-9644-9138b8a91caf", 00:05:55.899 "assigned_rate_limits": { 00:05:55.899 "rw_ios_per_sec": 0, 00:05:55.899 "rw_mbytes_per_sec": 0, 00:05:55.899 "r_mbytes_per_sec": 0, 00:05:55.899 "w_mbytes_per_sec": 0 00:05:55.899 }, 00:05:55.899 "claimed": true, 00:05:55.899 "claim_type": "exclusive_write", 00:05:55.899 "zoned": false, 00:05:55.899 "supported_io_types": { 00:05:55.899 "read": true, 00:05:55.899 "write": true, 00:05:55.899 "unmap": true, 00:05:55.899 "flush": true, 00:05:55.899 "reset": true, 00:05:55.899 "nvme_admin": false, 00:05:55.899 "nvme_io": false, 00:05:55.899 "nvme_io_md": false, 00:05:55.899 "write_zeroes": true, 00:05:55.899 "zcopy": true, 00:05:55.899 "get_zone_info": false, 00:05:55.899 "zone_management": false, 00:05:55.899 "zone_append": false, 00:05:55.899 "compare": false, 00:05:55.899 "compare_and_write": false, 00:05:55.899 "abort": true, 00:05:55.899 "seek_hole": false, 00:05:55.899 "seek_data": false, 00:05:55.899 "copy": true, 00:05:55.899 "nvme_iov_md": false 00:05:55.899 }, 00:05:55.899 "memory_domains": [ 00:05:55.899 { 00:05:55.899 "dma_device_id": "system", 00:05:55.899 "dma_device_type": 1 00:05:55.899 }, 00:05:55.899 { 00:05:55.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.899 "dma_device_type": 2 00:05:55.899 } 00:05:55.899 ], 00:05:55.899 "driver_specific": {} 00:05:55.899 }, 00:05:55.899 { 00:05:55.899 "name": "Passthru0", 00:05:55.899 "aliases": [ 00:05:55.899 "45a0f7ed-f7a7-5564-bef9-ca42e23b91ad" 00:05:55.899 ], 00:05:55.899 "product_name": "passthru", 00:05:55.899 
"block_size": 512, 00:05:55.899 "num_blocks": 16384, 00:05:55.899 "uuid": "45a0f7ed-f7a7-5564-bef9-ca42e23b91ad", 00:05:55.899 "assigned_rate_limits": { 00:05:55.899 "rw_ios_per_sec": 0, 00:05:55.899 "rw_mbytes_per_sec": 0, 00:05:55.899 "r_mbytes_per_sec": 0, 00:05:55.899 "w_mbytes_per_sec": 0 00:05:55.899 }, 00:05:55.899 "claimed": false, 00:05:55.899 "zoned": false, 00:05:55.899 "supported_io_types": { 00:05:55.899 "read": true, 00:05:55.899 "write": true, 00:05:55.899 "unmap": true, 00:05:55.899 "flush": true, 00:05:55.899 "reset": true, 00:05:55.899 "nvme_admin": false, 00:05:55.899 "nvme_io": false, 00:05:55.899 "nvme_io_md": false, 00:05:55.899 "write_zeroes": true, 00:05:55.899 "zcopy": true, 00:05:55.899 "get_zone_info": false, 00:05:55.899 "zone_management": false, 00:05:55.899 "zone_append": false, 00:05:55.899 "compare": false, 00:05:55.899 "compare_and_write": false, 00:05:55.899 "abort": true, 00:05:55.899 "seek_hole": false, 00:05:55.899 "seek_data": false, 00:05:55.899 "copy": true, 00:05:55.899 "nvme_iov_md": false 00:05:55.899 }, 00:05:55.899 "memory_domains": [ 00:05:55.899 { 00:05:55.899 "dma_device_id": "system", 00:05:55.899 "dma_device_type": 1 00:05:55.899 }, 00:05:55.899 { 00:05:55.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.899 "dma_device_type": 2 00:05:55.899 } 00:05:55.899 ], 00:05:55.899 "driver_specific": { 00:05:55.899 "passthru": { 00:05:55.899 "name": "Passthru0", 00:05:55.899 "base_bdev_name": "Malloc0" 00:05:55.899 } 00:05:55.899 } 00:05:55.899 } 00:05:55.899 ]' 00:05:55.899 21:32:56 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:55.899 21:32:56 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:55.899 21:32:56 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:55.899 21:32:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.899 21:32:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.899 21:32:56 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.899 21:32:56 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:55.899 21:32:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.899 21:32:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.158 21:32:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.158 21:32:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:56.158 21:32:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.158 21:32:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.158 21:32:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.158 21:32:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:56.158 21:32:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:56.158 21:32:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:56.158 00:05:56.158 real 0m0.373s 00:05:56.158 ************************************ 00:05:56.158 END TEST rpc_integrity 00:05:56.158 ************************************ 00:05:56.158 user 0m0.202s 00:05:56.158 sys 0m0.056s 00:05:56.158 21:32:56 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.158 21:32:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.158 21:32:56 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:56.158 21:32:56 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.158 21:32:56 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.158 21:32:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.158 ************************************ 00:05:56.158 START TEST rpc_plugins 00:05:56.158 ************************************ 00:05:56.159 21:32:56 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:56.159 21:32:56 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:05:56.159 21:32:56 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.159 21:32:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:56.159 21:32:56 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.159 21:32:56 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:56.159 21:32:56 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:56.159 21:32:56 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.159 21:32:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:56.159 21:32:56 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.159 21:32:56 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:56.159 { 00:05:56.159 "name": "Malloc1", 00:05:56.159 "aliases": [ 00:05:56.159 "eb66f31a-ed96-4676-8009-15229d8976af" 00:05:56.159 ], 00:05:56.159 "product_name": "Malloc disk", 00:05:56.159 "block_size": 4096, 00:05:56.159 "num_blocks": 256, 00:05:56.159 "uuid": "eb66f31a-ed96-4676-8009-15229d8976af", 00:05:56.159 "assigned_rate_limits": { 00:05:56.159 "rw_ios_per_sec": 0, 00:05:56.159 "rw_mbytes_per_sec": 0, 00:05:56.159 "r_mbytes_per_sec": 0, 00:05:56.159 "w_mbytes_per_sec": 0 00:05:56.159 }, 00:05:56.159 "claimed": false, 00:05:56.159 "zoned": false, 00:05:56.159 "supported_io_types": { 00:05:56.159 "read": true, 00:05:56.159 "write": true, 00:05:56.159 "unmap": true, 00:05:56.159 "flush": true, 00:05:56.159 "reset": true, 00:05:56.159 "nvme_admin": false, 00:05:56.159 "nvme_io": false, 00:05:56.159 "nvme_io_md": false, 00:05:56.159 "write_zeroes": true, 00:05:56.159 "zcopy": true, 00:05:56.159 "get_zone_info": false, 00:05:56.159 "zone_management": false, 00:05:56.159 "zone_append": false, 00:05:56.159 "compare": false, 00:05:56.159 "compare_and_write": false, 00:05:56.159 "abort": true, 00:05:56.159 "seek_hole": false, 00:05:56.159 "seek_data": false, 00:05:56.159 "copy": 
true, 00:05:56.159 "nvme_iov_md": false 00:05:56.159 }, 00:05:56.159 "memory_domains": [ 00:05:56.159 { 00:05:56.159 "dma_device_id": "system", 00:05:56.159 "dma_device_type": 1 00:05:56.159 }, 00:05:56.159 { 00:05:56.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:56.159 "dma_device_type": 2 00:05:56.159 } 00:05:56.159 ], 00:05:56.159 "driver_specific": {} 00:05:56.159 } 00:05:56.159 ]' 00:05:56.159 21:32:56 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:56.159 21:32:56 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:56.159 21:32:56 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:56.159 21:32:56 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.159 21:32:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:56.449 21:32:56 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.449 21:32:56 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:56.449 21:32:56 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.449 21:32:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:56.449 21:32:56 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.449 21:32:56 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:56.449 21:32:56 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:56.449 ************************************ 00:05:56.449 END TEST rpc_plugins 00:05:56.449 ************************************ 00:05:56.449 21:32:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:56.449 00:05:56.449 real 0m0.172s 00:05:56.449 user 0m0.096s 00:05:56.449 sys 0m0.027s 00:05:56.449 21:32:57 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.449 21:32:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:56.449 21:32:57 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:56.449 21:32:57 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.449 21:32:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.449 21:32:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.449 ************************************ 00:05:56.449 START TEST rpc_trace_cmd_test 00:05:56.449 ************************************ 00:05:56.449 21:32:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:56.449 21:32:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:56.449 21:32:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:56.449 21:32:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.449 21:32:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:56.449 21:32:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.449 21:32:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:56.449 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56929", 00:05:56.449 "tpoint_group_mask": "0x8", 00:05:56.449 "iscsi_conn": { 00:05:56.449 "mask": "0x2", 00:05:56.449 "tpoint_mask": "0x0" 00:05:56.449 }, 00:05:56.449 "scsi": { 00:05:56.449 "mask": "0x4", 00:05:56.449 "tpoint_mask": "0x0" 00:05:56.449 }, 00:05:56.449 "bdev": { 00:05:56.449 "mask": "0x8", 00:05:56.449 "tpoint_mask": "0xffffffffffffffff" 00:05:56.449 }, 00:05:56.449 "nvmf_rdma": { 00:05:56.449 "mask": "0x10", 00:05:56.449 "tpoint_mask": "0x0" 00:05:56.449 }, 00:05:56.449 "nvmf_tcp": { 00:05:56.449 "mask": "0x20", 00:05:56.449 "tpoint_mask": "0x0" 00:05:56.449 }, 00:05:56.449 "ftl": { 00:05:56.449 "mask": "0x40", 00:05:56.449 "tpoint_mask": "0x0" 00:05:56.449 }, 00:05:56.449 "blobfs": { 00:05:56.449 "mask": "0x80", 00:05:56.449 "tpoint_mask": "0x0" 00:05:56.449 }, 00:05:56.449 "dsa": { 00:05:56.449 "mask": "0x200", 00:05:56.449 "tpoint_mask": "0x0" 00:05:56.449 }, 00:05:56.449 "thread": { 00:05:56.449 "mask": "0x400", 00:05:56.449 
"tpoint_mask": "0x0" 00:05:56.449 }, 00:05:56.449 "nvme_pcie": { 00:05:56.449 "mask": "0x800", 00:05:56.449 "tpoint_mask": "0x0" 00:05:56.449 }, 00:05:56.449 "iaa": { 00:05:56.449 "mask": "0x1000", 00:05:56.449 "tpoint_mask": "0x0" 00:05:56.449 }, 00:05:56.449 "nvme_tcp": { 00:05:56.449 "mask": "0x2000", 00:05:56.449 "tpoint_mask": "0x0" 00:05:56.449 }, 00:05:56.449 "bdev_nvme": { 00:05:56.449 "mask": "0x4000", 00:05:56.449 "tpoint_mask": "0x0" 00:05:56.449 }, 00:05:56.449 "sock": { 00:05:56.449 "mask": "0x8000", 00:05:56.449 "tpoint_mask": "0x0" 00:05:56.449 }, 00:05:56.449 "blob": { 00:05:56.449 "mask": "0x10000", 00:05:56.449 "tpoint_mask": "0x0" 00:05:56.449 }, 00:05:56.449 "bdev_raid": { 00:05:56.449 "mask": "0x20000", 00:05:56.449 "tpoint_mask": "0x0" 00:05:56.449 }, 00:05:56.449 "scheduler": { 00:05:56.449 "mask": "0x40000", 00:05:56.449 "tpoint_mask": "0x0" 00:05:56.449 } 00:05:56.449 }' 00:05:56.449 21:32:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:56.449 21:32:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:56.449 21:32:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:56.449 21:32:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:56.449 21:32:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:56.709 21:32:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:56.709 21:32:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:56.709 21:32:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:56.709 21:32:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:56.709 ************************************ 00:05:56.709 END TEST rpc_trace_cmd_test 00:05:56.709 ************************************ 00:05:56.709 21:32:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:56.709 00:05:56.709 real 0m0.260s 00:05:56.709 user 
0m0.213s 00:05:56.709 sys 0m0.037s 00:05:56.709 21:32:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.709 21:32:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:56.709 21:32:57 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:56.709 21:32:57 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:56.709 21:32:57 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:56.709 21:32:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.709 21:32:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.709 21:32:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.709 ************************************ 00:05:56.709 START TEST rpc_daemon_integrity 00:05:56.709 ************************************ 00:05:56.709 21:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:56.709 21:32:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:56.709 21:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.709 21:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.709 21:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.709 21:32:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:56.709 21:32:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:56.709 21:32:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:56.709 21:32:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:56.709 21:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.709 21:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.709 21:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.709 21:32:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:05:56.709 21:32:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:56.709 21:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.709 21:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.969 21:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.969 21:32:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:56.969 { 00:05:56.969 "name": "Malloc2", 00:05:56.969 "aliases": [ 00:05:56.969 "8fc9698f-b4f7-4ab1-99d4-72e5527d805b" 00:05:56.969 ], 00:05:56.969 "product_name": "Malloc disk", 00:05:56.969 "block_size": 512, 00:05:56.969 "num_blocks": 16384, 00:05:56.969 "uuid": "8fc9698f-b4f7-4ab1-99d4-72e5527d805b", 00:05:56.969 "assigned_rate_limits": { 00:05:56.969 "rw_ios_per_sec": 0, 00:05:56.969 "rw_mbytes_per_sec": 0, 00:05:56.969 "r_mbytes_per_sec": 0, 00:05:56.969 "w_mbytes_per_sec": 0 00:05:56.969 }, 00:05:56.969 "claimed": false, 00:05:56.969 "zoned": false, 00:05:56.969 "supported_io_types": { 00:05:56.969 "read": true, 00:05:56.969 "write": true, 00:05:56.969 "unmap": true, 00:05:56.969 "flush": true, 00:05:56.969 "reset": true, 00:05:56.969 "nvme_admin": false, 00:05:56.969 "nvme_io": false, 00:05:56.969 "nvme_io_md": false, 00:05:56.969 "write_zeroes": true, 00:05:56.969 "zcopy": true, 00:05:56.969 "get_zone_info": false, 00:05:56.969 "zone_management": false, 00:05:56.969 "zone_append": false, 00:05:56.969 "compare": false, 00:05:56.969 "compare_and_write": false, 00:05:56.969 "abort": true, 00:05:56.969 "seek_hole": false, 00:05:56.969 "seek_data": false, 00:05:56.969 "copy": true, 00:05:56.969 "nvme_iov_md": false 00:05:56.969 }, 00:05:56.969 "memory_domains": [ 00:05:56.969 { 00:05:56.969 "dma_device_id": "system", 00:05:56.969 "dma_device_type": 1 00:05:56.969 }, 00:05:56.969 { 00:05:56.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:56.969 "dma_device_type": 2 00:05:56.969 } 
00:05:56.969 ], 00:05:56.969 "driver_specific": {} 00:05:56.969 } 00:05:56.969 ]' 00:05:56.969 21:32:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:56.969 21:32:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:56.969 21:32:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:56.969 21:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.969 21:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.969 [2024-12-10 21:32:57.550117] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:56.969 [2024-12-10 21:32:57.550207] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:56.969 [2024-12-10 21:32:57.550233] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:56.969 [2024-12-10 21:32:57.550246] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:56.969 [2024-12-10 21:32:57.552917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:56.969 [2024-12-10 21:32:57.553058] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:56.969 Passthru0 00:05:56.969 21:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.969 21:32:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:56.969 21:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.969 21:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.969 21:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.969 21:32:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:56.969 { 00:05:56.969 "name": "Malloc2", 00:05:56.969 "aliases": [ 00:05:56.969 "8fc9698f-b4f7-4ab1-99d4-72e5527d805b" 
00:05:56.969 ], 00:05:56.969 "product_name": "Malloc disk", 00:05:56.969 "block_size": 512, 00:05:56.969 "num_blocks": 16384, 00:05:56.969 "uuid": "8fc9698f-b4f7-4ab1-99d4-72e5527d805b", 00:05:56.969 "assigned_rate_limits": { 00:05:56.969 "rw_ios_per_sec": 0, 00:05:56.969 "rw_mbytes_per_sec": 0, 00:05:56.969 "r_mbytes_per_sec": 0, 00:05:56.969 "w_mbytes_per_sec": 0 00:05:56.969 }, 00:05:56.969 "claimed": true, 00:05:56.969 "claim_type": "exclusive_write", 00:05:56.969 "zoned": false, 00:05:56.969 "supported_io_types": { 00:05:56.969 "read": true, 00:05:56.969 "write": true, 00:05:56.969 "unmap": true, 00:05:56.969 "flush": true, 00:05:56.969 "reset": true, 00:05:56.969 "nvme_admin": false, 00:05:56.969 "nvme_io": false, 00:05:56.969 "nvme_io_md": false, 00:05:56.969 "write_zeroes": true, 00:05:56.969 "zcopy": true, 00:05:56.969 "get_zone_info": false, 00:05:56.969 "zone_management": false, 00:05:56.969 "zone_append": false, 00:05:56.969 "compare": false, 00:05:56.969 "compare_and_write": false, 00:05:56.969 "abort": true, 00:05:56.969 "seek_hole": false, 00:05:56.969 "seek_data": false, 00:05:56.969 "copy": true, 00:05:56.969 "nvme_iov_md": false 00:05:56.969 }, 00:05:56.969 "memory_domains": [ 00:05:56.969 { 00:05:56.969 "dma_device_id": "system", 00:05:56.969 "dma_device_type": 1 00:05:56.969 }, 00:05:56.969 { 00:05:56.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:56.969 "dma_device_type": 2 00:05:56.969 } 00:05:56.969 ], 00:05:56.969 "driver_specific": {} 00:05:56.969 }, 00:05:56.969 { 00:05:56.969 "name": "Passthru0", 00:05:56.969 "aliases": [ 00:05:56.969 "f7c6e204-627c-5022-9b3d-13b9e92a2193" 00:05:56.969 ], 00:05:56.970 "product_name": "passthru", 00:05:56.970 "block_size": 512, 00:05:56.970 "num_blocks": 16384, 00:05:56.970 "uuid": "f7c6e204-627c-5022-9b3d-13b9e92a2193", 00:05:56.970 "assigned_rate_limits": { 00:05:56.970 "rw_ios_per_sec": 0, 00:05:56.970 "rw_mbytes_per_sec": 0, 00:05:56.970 "r_mbytes_per_sec": 0, 00:05:56.970 "w_mbytes_per_sec": 0 
00:05:56.970 }, 00:05:56.970 "claimed": false, 00:05:56.970 "zoned": false, 00:05:56.970 "supported_io_types": { 00:05:56.970 "read": true, 00:05:56.970 "write": true, 00:05:56.970 "unmap": true, 00:05:56.970 "flush": true, 00:05:56.970 "reset": true, 00:05:56.970 "nvme_admin": false, 00:05:56.970 "nvme_io": false, 00:05:56.970 "nvme_io_md": false, 00:05:56.970 "write_zeroes": true, 00:05:56.970 "zcopy": true, 00:05:56.970 "get_zone_info": false, 00:05:56.970 "zone_management": false, 00:05:56.970 "zone_append": false, 00:05:56.970 "compare": false, 00:05:56.970 "compare_and_write": false, 00:05:56.970 "abort": true, 00:05:56.970 "seek_hole": false, 00:05:56.970 "seek_data": false, 00:05:56.970 "copy": true, 00:05:56.970 "nvme_iov_md": false 00:05:56.970 }, 00:05:56.970 "memory_domains": [ 00:05:56.970 { 00:05:56.970 "dma_device_id": "system", 00:05:56.970 "dma_device_type": 1 00:05:56.970 }, 00:05:56.970 { 00:05:56.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:56.970 "dma_device_type": 2 00:05:56.970 } 00:05:56.970 ], 00:05:56.970 "driver_specific": { 00:05:56.970 "passthru": { 00:05:56.970 "name": "Passthru0", 00:05:56.970 "base_bdev_name": "Malloc2" 00:05:56.970 } 00:05:56.970 } 00:05:56.970 } 00:05:56.970 ]' 00:05:56.970 21:32:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:56.970 21:32:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:56.970 21:32:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:56.970 21:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.970 21:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.970 21:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.970 21:32:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:56.970 21:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:05:56.970 21:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.970 21:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.970 21:32:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:56.970 21:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.970 21:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.970 21:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.970 21:32:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:56.970 21:32:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:56.970 ************************************ 00:05:56.970 END TEST rpc_daemon_integrity 00:05:56.970 ************************************ 00:05:56.970 21:32:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:56.970 00:05:56.970 real 0m0.341s 00:05:56.970 user 0m0.190s 00:05:56.970 sys 0m0.047s 00:05:56.970 21:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.970 21:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.229 21:32:57 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:57.229 21:32:57 rpc -- rpc/rpc.sh@84 -- # killprocess 56929 00:05:57.229 21:32:57 rpc -- common/autotest_common.sh@954 -- # '[' -z 56929 ']' 00:05:57.229 21:32:57 rpc -- common/autotest_common.sh@958 -- # kill -0 56929 00:05:57.229 21:32:57 rpc -- common/autotest_common.sh@959 -- # uname 00:05:57.229 21:32:57 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.229 21:32:57 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56929 00:05:57.229 killing process with pid 56929 00:05:57.229 21:32:57 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.229 21:32:57 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:05:57.229 21:32:57 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56929' 00:05:57.229 21:32:57 rpc -- common/autotest_common.sh@973 -- # kill 56929 00:05:57.229 21:32:57 rpc -- common/autotest_common.sh@978 -- # wait 56929 00:05:59.791 00:05:59.791 real 0m5.681s 00:05:59.791 user 0m6.299s 00:05:59.791 sys 0m0.906s 00:05:59.791 21:33:00 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.791 21:33:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.791 ************************************ 00:05:59.791 END TEST rpc 00:05:59.791 ************************************ 00:05:59.791 21:33:00 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:59.791 21:33:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.791 21:33:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.791 21:33:00 -- common/autotest_common.sh@10 -- # set +x 00:05:59.791 ************************************ 00:05:59.791 START TEST skip_rpc 00:05:59.791 ************************************ 00:05:59.791 21:33:00 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:59.791 * Looking for test storage... 
00:06:00.051 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:00.051 21:33:00 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:00.051 21:33:00 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:00.051 21:33:00 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:00.051 21:33:00 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:00.051 21:33:00 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.051 21:33:00 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.051 21:33:00 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.051 21:33:00 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.051 21:33:00 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.051 21:33:00 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.051 21:33:00 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.051 21:33:00 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.051 21:33:00 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.051 21:33:00 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.051 21:33:00 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.051 21:33:00 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:00.051 21:33:00 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:00.051 21:33:00 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.051 21:33:00 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:00.051 21:33:00 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:00.051 21:33:00 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:00.051 21:33:00 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.051 21:33:00 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:00.051 21:33:00 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.051 21:33:00 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:00.051 21:33:00 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:00.051 21:33:00 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.051 21:33:00 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:00.051 21:33:00 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.051 21:33:00 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.051 21:33:00 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.051 21:33:00 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:00.051 21:33:00 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.051 21:33:00 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:00.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.051 --rc genhtml_branch_coverage=1 00:06:00.051 --rc genhtml_function_coverage=1 00:06:00.051 --rc genhtml_legend=1 00:06:00.051 --rc geninfo_all_blocks=1 00:06:00.051 --rc geninfo_unexecuted_blocks=1 00:06:00.051 00:06:00.051 ' 00:06:00.051 21:33:00 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:00.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.051 --rc genhtml_branch_coverage=1 00:06:00.051 --rc genhtml_function_coverage=1 00:06:00.051 --rc genhtml_legend=1 00:06:00.051 --rc geninfo_all_blocks=1 00:06:00.051 --rc geninfo_unexecuted_blocks=1 00:06:00.051 00:06:00.051 ' 00:06:00.051 21:33:00 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:06:00.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.051 --rc genhtml_branch_coverage=1 00:06:00.051 --rc genhtml_function_coverage=1 00:06:00.051 --rc genhtml_legend=1 00:06:00.051 --rc geninfo_all_blocks=1 00:06:00.051 --rc geninfo_unexecuted_blocks=1 00:06:00.051 00:06:00.051 ' 00:06:00.051 21:33:00 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:00.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.051 --rc genhtml_branch_coverage=1 00:06:00.051 --rc genhtml_function_coverage=1 00:06:00.051 --rc genhtml_legend=1 00:06:00.051 --rc geninfo_all_blocks=1 00:06:00.051 --rc geninfo_unexecuted_blocks=1 00:06:00.051 00:06:00.051 ' 00:06:00.051 21:33:00 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:00.051 21:33:00 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:00.051 21:33:00 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:00.051 21:33:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.051 21:33:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.051 21:33:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.051 ************************************ 00:06:00.051 START TEST skip_rpc 00:06:00.051 ************************************ 00:06:00.051 21:33:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:00.051 21:33:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57163 00:06:00.051 21:33:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:00.051 21:33:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:00.051 21:33:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:00.051 [2024-12-10 21:33:00.801273] Starting SPDK v25.01-pre 
git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:06:00.051 [2024-12-10 21:33:00.801402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57163 ] 00:06:00.311 [2024-12-10 21:33:00.983611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.570 [2024-12-10 21:33:01.103627] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.850 21:33:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:05.850 21:33:05 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:05.850 21:33:05 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:05.850 21:33:05 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:05.850 21:33:05 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.850 21:33:05 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:05.850 21:33:05 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.850 21:33:05 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:05.850 21:33:05 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.850 21:33:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.850 21:33:05 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:05.850 21:33:05 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:05.850 21:33:05 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:05.850 21:33:05 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:05.850 21:33:05 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:06:05.850 21:33:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:05.850 21:33:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57163 00:06:05.850 21:33:05 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57163 ']' 00:06:05.850 21:33:05 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57163 00:06:05.850 21:33:05 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:05.850 21:33:05 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.850 21:33:05 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57163 00:06:05.850 killing process with pid 57163 00:06:05.850 21:33:05 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.850 21:33:05 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.850 21:33:05 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57163' 00:06:05.850 21:33:05 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57163 00:06:05.850 21:33:05 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57163 00:06:07.766 00:06:07.766 real 0m7.558s 00:06:07.766 user 0m7.057s 00:06:07.766 sys 0m0.406s 00:06:07.766 21:33:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.766 21:33:08 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.766 ************************************ 00:06:07.766 END TEST skip_rpc 00:06:07.766 ************************************ 00:06:07.766 21:33:08 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:07.766 21:33:08 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.766 21:33:08 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.766 21:33:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.766 
************************************ 00:06:07.766 START TEST skip_rpc_with_json 00:06:07.766 ************************************ 00:06:07.766 21:33:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:07.766 21:33:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:07.766 21:33:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57273 00:06:07.766 21:33:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.766 21:33:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.766 21:33:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57273 00:06:07.766 21:33:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57273 ']' 00:06:07.766 21:33:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.767 21:33:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.767 21:33:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.767 21:33:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.767 21:33:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:07.767 [2024-12-10 21:33:08.424572] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:06:07.767 [2024-12-10 21:33:08.424821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57273 ] 00:06:08.026 [2024-12-10 21:33:08.598225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.026 [2024-12-10 21:33:08.710294] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.966 21:33:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.966 21:33:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:08.966 21:33:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:08.966 21:33:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.966 21:33:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:08.966 [2024-12-10 21:33:09.558337] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:08.966 request: 00:06:08.966 { 00:06:08.966 "trtype": "tcp", 00:06:08.966 "method": "nvmf_get_transports", 00:06:08.966 "req_id": 1 00:06:08.966 } 00:06:08.966 Got JSON-RPC error response 00:06:08.966 response: 00:06:08.966 { 00:06:08.966 "code": -19, 00:06:08.966 "message": "No such device" 00:06:08.966 } 00:06:08.966 21:33:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:08.966 21:33:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:08.966 21:33:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.966 21:33:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:08.966 [2024-12-10 21:33:09.574459] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:06:08.966 21:33:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.966 21:33:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:08.966 21:33:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.966 21:33:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:09.226 21:33:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.226 21:33:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:09.226 { 00:06:09.226 "subsystems": [ 00:06:09.226 { 00:06:09.226 "subsystem": "fsdev", 00:06:09.226 "config": [ 00:06:09.226 { 00:06:09.226 "method": "fsdev_set_opts", 00:06:09.226 "params": { 00:06:09.226 "fsdev_io_pool_size": 65535, 00:06:09.226 "fsdev_io_cache_size": 256 00:06:09.226 } 00:06:09.226 } 00:06:09.226 ] 00:06:09.226 }, 00:06:09.226 { 00:06:09.226 "subsystem": "keyring", 00:06:09.226 "config": [] 00:06:09.226 }, 00:06:09.226 { 00:06:09.226 "subsystem": "iobuf", 00:06:09.226 "config": [ 00:06:09.226 { 00:06:09.226 "method": "iobuf_set_options", 00:06:09.226 "params": { 00:06:09.226 "small_pool_count": 8192, 00:06:09.226 "large_pool_count": 1024, 00:06:09.226 "small_bufsize": 8192, 00:06:09.226 "large_bufsize": 135168, 00:06:09.226 "enable_numa": false 00:06:09.226 } 00:06:09.226 } 00:06:09.226 ] 00:06:09.226 }, 00:06:09.226 { 00:06:09.226 "subsystem": "sock", 00:06:09.226 "config": [ 00:06:09.226 { 00:06:09.226 "method": "sock_set_default_impl", 00:06:09.226 "params": { 00:06:09.226 "impl_name": "posix" 00:06:09.226 } 00:06:09.226 }, 00:06:09.226 { 00:06:09.226 "method": "sock_impl_set_options", 00:06:09.226 "params": { 00:06:09.226 "impl_name": "ssl", 00:06:09.226 "recv_buf_size": 4096, 00:06:09.226 "send_buf_size": 4096, 00:06:09.226 "enable_recv_pipe": true, 00:06:09.226 "enable_quickack": false, 00:06:09.226 
"enable_placement_id": 0, 00:06:09.226 "enable_zerocopy_send_server": true, 00:06:09.226 "enable_zerocopy_send_client": false, 00:06:09.226 "zerocopy_threshold": 0, 00:06:09.226 "tls_version": 0, 00:06:09.226 "enable_ktls": false 00:06:09.226 } 00:06:09.226 }, 00:06:09.226 { 00:06:09.226 "method": "sock_impl_set_options", 00:06:09.226 "params": { 00:06:09.226 "impl_name": "posix", 00:06:09.226 "recv_buf_size": 2097152, 00:06:09.226 "send_buf_size": 2097152, 00:06:09.226 "enable_recv_pipe": true, 00:06:09.226 "enable_quickack": false, 00:06:09.226 "enable_placement_id": 0, 00:06:09.226 "enable_zerocopy_send_server": true, 00:06:09.226 "enable_zerocopy_send_client": false, 00:06:09.226 "zerocopy_threshold": 0, 00:06:09.226 "tls_version": 0, 00:06:09.226 "enable_ktls": false 00:06:09.226 } 00:06:09.226 } 00:06:09.226 ] 00:06:09.226 }, 00:06:09.226 { 00:06:09.226 "subsystem": "vmd", 00:06:09.226 "config": [] 00:06:09.226 }, 00:06:09.226 { 00:06:09.226 "subsystem": "accel", 00:06:09.226 "config": [ 00:06:09.226 { 00:06:09.226 "method": "accel_set_options", 00:06:09.226 "params": { 00:06:09.226 "small_cache_size": 128, 00:06:09.226 "large_cache_size": 16, 00:06:09.226 "task_count": 2048, 00:06:09.226 "sequence_count": 2048, 00:06:09.226 "buf_count": 2048 00:06:09.226 } 00:06:09.226 } 00:06:09.226 ] 00:06:09.226 }, 00:06:09.226 { 00:06:09.226 "subsystem": "bdev", 00:06:09.226 "config": [ 00:06:09.226 { 00:06:09.226 "method": "bdev_set_options", 00:06:09.226 "params": { 00:06:09.226 "bdev_io_pool_size": 65535, 00:06:09.226 "bdev_io_cache_size": 256, 00:06:09.226 "bdev_auto_examine": true, 00:06:09.226 "iobuf_small_cache_size": 128, 00:06:09.226 "iobuf_large_cache_size": 16 00:06:09.226 } 00:06:09.226 }, 00:06:09.226 { 00:06:09.226 "method": "bdev_raid_set_options", 00:06:09.226 "params": { 00:06:09.226 "process_window_size_kb": 1024, 00:06:09.226 "process_max_bandwidth_mb_sec": 0 00:06:09.226 } 00:06:09.226 }, 00:06:09.226 { 00:06:09.226 "method": "bdev_iscsi_set_options", 
00:06:09.226 "params": { 00:06:09.226 "timeout_sec": 30 00:06:09.226 } 00:06:09.226 }, 00:06:09.226 { 00:06:09.226 "method": "bdev_nvme_set_options", 00:06:09.226 "params": { 00:06:09.226 "action_on_timeout": "none", 00:06:09.227 "timeout_us": 0, 00:06:09.227 "timeout_admin_us": 0, 00:06:09.227 "keep_alive_timeout_ms": 10000, 00:06:09.227 "arbitration_burst": 0, 00:06:09.227 "low_priority_weight": 0, 00:06:09.227 "medium_priority_weight": 0, 00:06:09.227 "high_priority_weight": 0, 00:06:09.227 "nvme_adminq_poll_period_us": 10000, 00:06:09.227 "nvme_ioq_poll_period_us": 0, 00:06:09.227 "io_queue_requests": 0, 00:06:09.227 "delay_cmd_submit": true, 00:06:09.227 "transport_retry_count": 4, 00:06:09.227 "bdev_retry_count": 3, 00:06:09.227 "transport_ack_timeout": 0, 00:06:09.227 "ctrlr_loss_timeout_sec": 0, 00:06:09.227 "reconnect_delay_sec": 0, 00:06:09.227 "fast_io_fail_timeout_sec": 0, 00:06:09.227 "disable_auto_failback": false, 00:06:09.227 "generate_uuids": false, 00:06:09.227 "transport_tos": 0, 00:06:09.227 "nvme_error_stat": false, 00:06:09.227 "rdma_srq_size": 0, 00:06:09.227 "io_path_stat": false, 00:06:09.227 "allow_accel_sequence": false, 00:06:09.227 "rdma_max_cq_size": 0, 00:06:09.227 "rdma_cm_event_timeout_ms": 0, 00:06:09.227 "dhchap_digests": [ 00:06:09.227 "sha256", 00:06:09.227 "sha384", 00:06:09.227 "sha512" 00:06:09.227 ], 00:06:09.227 "dhchap_dhgroups": [ 00:06:09.227 "null", 00:06:09.227 "ffdhe2048", 00:06:09.227 "ffdhe3072", 00:06:09.227 "ffdhe4096", 00:06:09.227 "ffdhe6144", 00:06:09.227 "ffdhe8192" 00:06:09.227 ], 00:06:09.227 "rdma_umr_per_io": false 00:06:09.227 } 00:06:09.227 }, 00:06:09.227 { 00:06:09.227 "method": "bdev_nvme_set_hotplug", 00:06:09.227 "params": { 00:06:09.227 "period_us": 100000, 00:06:09.227 "enable": false 00:06:09.227 } 00:06:09.227 }, 00:06:09.227 { 00:06:09.227 "method": "bdev_wait_for_examine" 00:06:09.227 } 00:06:09.227 ] 00:06:09.227 }, 00:06:09.227 { 00:06:09.227 "subsystem": "scsi", 00:06:09.227 "config": null 
00:06:09.227 }, 00:06:09.227 { 00:06:09.227 "subsystem": "scheduler", 00:06:09.227 "config": [ 00:06:09.227 { 00:06:09.227 "method": "framework_set_scheduler", 00:06:09.227 "params": { 00:06:09.227 "name": "static" 00:06:09.227 } 00:06:09.227 } 00:06:09.227 ] 00:06:09.227 }, 00:06:09.227 { 00:06:09.227 "subsystem": "vhost_scsi", 00:06:09.227 "config": [] 00:06:09.227 }, 00:06:09.227 { 00:06:09.227 "subsystem": "vhost_blk", 00:06:09.227 "config": [] 00:06:09.227 }, 00:06:09.227 { 00:06:09.227 "subsystem": "ublk", 00:06:09.227 "config": [] 00:06:09.227 }, 00:06:09.227 { 00:06:09.227 "subsystem": "nbd", 00:06:09.227 "config": [] 00:06:09.227 }, 00:06:09.227 { 00:06:09.227 "subsystem": "nvmf", 00:06:09.227 "config": [ 00:06:09.227 { 00:06:09.227 "method": "nvmf_set_config", 00:06:09.227 "params": { 00:06:09.227 "discovery_filter": "match_any", 00:06:09.227 "admin_cmd_passthru": { 00:06:09.227 "identify_ctrlr": false 00:06:09.227 }, 00:06:09.227 "dhchap_digests": [ 00:06:09.227 "sha256", 00:06:09.227 "sha384", 00:06:09.227 "sha512" 00:06:09.227 ], 00:06:09.227 "dhchap_dhgroups": [ 00:06:09.227 "null", 00:06:09.227 "ffdhe2048", 00:06:09.227 "ffdhe3072", 00:06:09.227 "ffdhe4096", 00:06:09.227 "ffdhe6144", 00:06:09.227 "ffdhe8192" 00:06:09.227 ] 00:06:09.227 } 00:06:09.227 }, 00:06:09.227 { 00:06:09.227 "method": "nvmf_set_max_subsystems", 00:06:09.227 "params": { 00:06:09.227 "max_subsystems": 1024 00:06:09.227 } 00:06:09.227 }, 00:06:09.227 { 00:06:09.227 "method": "nvmf_set_crdt", 00:06:09.227 "params": { 00:06:09.227 "crdt1": 0, 00:06:09.227 "crdt2": 0, 00:06:09.227 "crdt3": 0 00:06:09.227 } 00:06:09.227 }, 00:06:09.227 { 00:06:09.227 "method": "nvmf_create_transport", 00:06:09.227 "params": { 00:06:09.227 "trtype": "TCP", 00:06:09.227 "max_queue_depth": 128, 00:06:09.227 "max_io_qpairs_per_ctrlr": 127, 00:06:09.227 "in_capsule_data_size": 4096, 00:06:09.227 "max_io_size": 131072, 00:06:09.227 "io_unit_size": 131072, 00:06:09.227 "max_aq_depth": 128, 00:06:09.227 
"num_shared_buffers": 511, 00:06:09.227 "buf_cache_size": 4294967295, 00:06:09.227 "dif_insert_or_strip": false, 00:06:09.227 "zcopy": false, 00:06:09.227 "c2h_success": true, 00:06:09.227 "sock_priority": 0, 00:06:09.227 "abort_timeout_sec": 1, 00:06:09.227 "ack_timeout": 0, 00:06:09.227 "data_wr_pool_size": 0 00:06:09.227 } 00:06:09.227 } 00:06:09.227 ] 00:06:09.227 }, 00:06:09.227 { 00:06:09.227 "subsystem": "iscsi", 00:06:09.227 "config": [ 00:06:09.227 { 00:06:09.227 "method": "iscsi_set_options", 00:06:09.227 "params": { 00:06:09.227 "node_base": "iqn.2016-06.io.spdk", 00:06:09.227 "max_sessions": 128, 00:06:09.227 "max_connections_per_session": 2, 00:06:09.227 "max_queue_depth": 64, 00:06:09.227 "default_time2wait": 2, 00:06:09.227 "default_time2retain": 20, 00:06:09.227 "first_burst_length": 8192, 00:06:09.227 "immediate_data": true, 00:06:09.227 "allow_duplicated_isid": false, 00:06:09.227 "error_recovery_level": 0, 00:06:09.227 "nop_timeout": 60, 00:06:09.227 "nop_in_interval": 30, 00:06:09.227 "disable_chap": false, 00:06:09.227 "require_chap": false, 00:06:09.227 "mutual_chap": false, 00:06:09.227 "chap_group": 0, 00:06:09.227 "max_large_datain_per_connection": 64, 00:06:09.227 "max_r2t_per_connection": 4, 00:06:09.227 "pdu_pool_size": 36864, 00:06:09.227 "immediate_data_pool_size": 16384, 00:06:09.227 "data_out_pool_size": 2048 00:06:09.227 } 00:06:09.227 } 00:06:09.227 ] 00:06:09.227 } 00:06:09.227 ] 00:06:09.227 } 00:06:09.227 21:33:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:09.227 21:33:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57273 00:06:09.227 21:33:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57273 ']' 00:06:09.227 21:33:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57273 00:06:09.227 21:33:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:09.227 21:33:09 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.227 21:33:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57273 00:06:09.227 killing process with pid 57273 00:06:09.227 21:33:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:09.227 21:33:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:09.227 21:33:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57273' 00:06:09.227 21:33:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57273 00:06:09.227 21:33:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57273 00:06:11.798 21:33:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57329 00:06:11.798 21:33:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:11.798 21:33:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:17.075 21:33:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57329 00:06:17.075 21:33:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57329 ']' 00:06:17.075 21:33:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57329 00:06:17.075 21:33:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:17.075 21:33:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.075 21:33:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57329 00:06:17.075 killing process with pid 57329 00:06:17.075 21:33:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:17.075 21:33:17 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:17.075 21:33:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57329' 00:06:17.075 21:33:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57329 00:06:17.075 21:33:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57329 00:06:19.612 21:33:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:19.612 21:33:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:19.612 ************************************ 00:06:19.612 END TEST skip_rpc_with_json 00:06:19.612 ************************************ 00:06:19.612 00:06:19.612 real 0m11.488s 00:06:19.612 user 0m10.958s 00:06:19.612 sys 0m0.821s 00:06:19.612 21:33:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.612 21:33:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:19.612 21:33:19 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:19.612 21:33:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.612 21:33:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.612 21:33:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.612 ************************************ 00:06:19.612 START TEST skip_rpc_with_delay 00:06:19.612 ************************************ 00:06:19.612 21:33:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:19.612 21:33:19 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:19.612 21:33:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- 
# local es=0 00:06:19.612 21:33:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:19.612 21:33:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:19.612 21:33:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.612 21:33:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:19.612 21:33:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.612 21:33:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:19.612 21:33:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.613 21:33:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:19.613 21:33:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:19.613 21:33:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:19.613 [2024-12-10 21:33:19.990871] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:19.613 21:33:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:19.613 21:33:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:19.613 21:33:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:19.613 ************************************ 00:06:19.613 END TEST skip_rpc_with_delay 00:06:19.613 ************************************ 00:06:19.613 21:33:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:19.613 00:06:19.613 real 0m0.182s 00:06:19.613 user 0m0.100s 00:06:19.613 sys 0m0.078s 00:06:19.613 21:33:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.613 21:33:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:19.613 21:33:20 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:19.613 21:33:20 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:19.613 21:33:20 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:19.613 21:33:20 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.613 21:33:20 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.613 21:33:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.613 ************************************ 00:06:19.613 START TEST exit_on_failed_rpc_init 00:06:19.613 ************************************ 00:06:19.613 21:33:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:19.613 21:33:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57457 00:06:19.613 21:33:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.613 21:33:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57457 00:06:19.613 21:33:20 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57457 ']' 00:06:19.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.613 21:33:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.613 21:33:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.613 21:33:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.613 21:33:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.613 21:33:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:19.613 [2024-12-10 21:33:20.234354] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:06:19.613 [2024-12-10 21:33:20.234884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57457 ] 00:06:19.872 [2024-12-10 21:33:20.408141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.872 [2024-12-10 21:33:20.535527] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.809 21:33:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.809 21:33:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:20.809 21:33:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:20.809 21:33:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:20.809 21:33:21 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:20.809 21:33:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:20.810 21:33:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:20.810 21:33:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.810 21:33:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:20.810 21:33:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.810 21:33:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:20.810 21:33:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.810 21:33:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:20.810 21:33:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:20.810 21:33:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:20.810 [2024-12-10 21:33:21.559866] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:06:20.810 [2024-12-10 21:33:21.560129] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57481 ] 00:06:21.068 [2024-12-10 21:33:21.738494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.327 [2024-12-10 21:33:21.884735] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.327 [2024-12-10 21:33:21.884952] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:21.327 [2024-12-10 21:33:21.885007] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:21.327 [2024-12-10 21:33:21.885040] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:21.586 21:33:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:21.586 21:33:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:21.586 21:33:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:21.586 21:33:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:21.586 21:33:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:21.586 21:33:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:21.586 21:33:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:21.586 21:33:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57457 00:06:21.586 21:33:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57457 ']' 00:06:21.586 21:33:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57457 00:06:21.586 21:33:22 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:21.586 21:33:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.586 21:33:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57457 00:06:21.586 21:33:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:21.586 21:33:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:21.586 21:33:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57457' 00:06:21.586 killing process with pid 57457 00:06:21.586 21:33:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57457 00:06:21.586 21:33:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57457 00:06:24.132 00:06:24.132 real 0m4.592s 00:06:24.132 user 0m4.974s 00:06:24.132 sys 0m0.588s 00:06:24.132 ************************************ 00:06:24.132 END TEST exit_on_failed_rpc_init 00:06:24.132 ************************************ 00:06:24.132 21:33:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.132 21:33:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:24.132 21:33:24 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:24.132 ************************************ 00:06:24.132 END TEST skip_rpc 00:06:24.132 ************************************ 00:06:24.132 00:06:24.132 real 0m24.327s 00:06:24.132 user 0m23.295s 00:06:24.132 sys 0m2.212s 00:06:24.132 21:33:24 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.132 21:33:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.132 21:33:24 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:24.132 21:33:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.132 21:33:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.132 21:33:24 -- common/autotest_common.sh@10 -- # set +x 00:06:24.132 ************************************ 00:06:24.132 START TEST rpc_client 00:06:24.132 ************************************ 00:06:24.132 21:33:24 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:24.398 * Looking for test storage... 00:06:24.398 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:24.398 21:33:24 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:24.398 21:33:24 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:06:24.399 21:33:24 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:24.399 21:33:25 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:24.399 21:33:25 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.399 21:33:25 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.399 21:33:25 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.399 21:33:25 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.399 21:33:25 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.399 21:33:25 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.399 21:33:25 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.399 21:33:25 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.399 21:33:25 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.399 21:33:25 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.399 21:33:25 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.399 21:33:25 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:24.399 21:33:25 rpc_client -- scripts/common.sh@345 
-- # : 1 00:06:24.399 21:33:25 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.399 21:33:25 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:24.399 21:33:25 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:24.399 21:33:25 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:24.399 21:33:25 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.399 21:33:25 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:24.399 21:33:25 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.399 21:33:25 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:24.399 21:33:25 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:24.399 21:33:25 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.399 21:33:25 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:24.399 21:33:25 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.399 21:33:25 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.399 21:33:25 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.399 21:33:25 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:24.399 21:33:25 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.399 21:33:25 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:24.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.399 --rc genhtml_branch_coverage=1 00:06:24.399 --rc genhtml_function_coverage=1 00:06:24.399 --rc genhtml_legend=1 00:06:24.399 --rc geninfo_all_blocks=1 00:06:24.399 --rc geninfo_unexecuted_blocks=1 00:06:24.399 00:06:24.399 ' 00:06:24.399 21:33:25 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:24.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.399 --rc genhtml_branch_coverage=1 00:06:24.399 --rc genhtml_function_coverage=1 00:06:24.399 --rc 
genhtml_legend=1 00:06:24.399 --rc geninfo_all_blocks=1 00:06:24.399 --rc geninfo_unexecuted_blocks=1 00:06:24.399 00:06:24.399 ' 00:06:24.399 21:33:25 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:24.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.399 --rc genhtml_branch_coverage=1 00:06:24.399 --rc genhtml_function_coverage=1 00:06:24.399 --rc genhtml_legend=1 00:06:24.399 --rc geninfo_all_blocks=1 00:06:24.399 --rc geninfo_unexecuted_blocks=1 00:06:24.399 00:06:24.399 ' 00:06:24.399 21:33:25 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:24.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.399 --rc genhtml_branch_coverage=1 00:06:24.399 --rc genhtml_function_coverage=1 00:06:24.399 --rc genhtml_legend=1 00:06:24.399 --rc geninfo_all_blocks=1 00:06:24.399 --rc geninfo_unexecuted_blocks=1 00:06:24.399 00:06:24.399 ' 00:06:24.399 21:33:25 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:24.399 OK 00:06:24.399 21:33:25 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:24.399 00:06:24.399 real 0m0.309s 00:06:24.399 user 0m0.174s 00:06:24.399 sys 0m0.150s 00:06:24.399 ************************************ 00:06:24.399 END TEST rpc_client 00:06:24.399 ************************************ 00:06:24.399 21:33:25 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.399 21:33:25 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:24.666 21:33:25 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:24.666 21:33:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.666 21:33:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.666 21:33:25 -- common/autotest_common.sh@10 -- # set +x 00:06:24.666 ************************************ 00:06:24.666 START TEST json_config 
00:06:24.666 ************************************ 00:06:24.666 21:33:25 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:24.666 21:33:25 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:24.666 21:33:25 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:06:24.666 21:33:25 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:24.666 21:33:25 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:24.666 21:33:25 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.666 21:33:25 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.666 21:33:25 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.666 21:33:25 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.666 21:33:25 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.666 21:33:25 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.666 21:33:25 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.666 21:33:25 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.666 21:33:25 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.666 21:33:25 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.666 21:33:25 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.666 21:33:25 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:24.666 21:33:25 json_config -- scripts/common.sh@345 -- # : 1 00:06:24.666 21:33:25 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.667 21:33:25 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:24.667 21:33:25 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:24.667 21:33:25 json_config -- scripts/common.sh@353 -- # local d=1 00:06:24.667 21:33:25 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.667 21:33:25 json_config -- scripts/common.sh@355 -- # echo 1 00:06:24.667 21:33:25 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.667 21:33:25 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:24.667 21:33:25 json_config -- scripts/common.sh@353 -- # local d=2 00:06:24.667 21:33:25 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.667 21:33:25 json_config -- scripts/common.sh@355 -- # echo 2 00:06:24.667 21:33:25 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.667 21:33:25 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.667 21:33:25 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.667 21:33:25 json_config -- scripts/common.sh@368 -- # return 0 00:06:24.667 21:33:25 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.667 21:33:25 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:24.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.667 --rc genhtml_branch_coverage=1 00:06:24.667 --rc genhtml_function_coverage=1 00:06:24.667 --rc genhtml_legend=1 00:06:24.667 --rc geninfo_all_blocks=1 00:06:24.667 --rc geninfo_unexecuted_blocks=1 00:06:24.667 00:06:24.667 ' 00:06:24.667 21:33:25 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:24.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.667 --rc genhtml_branch_coverage=1 00:06:24.667 --rc genhtml_function_coverage=1 00:06:24.667 --rc genhtml_legend=1 00:06:24.667 --rc geninfo_all_blocks=1 00:06:24.667 --rc geninfo_unexecuted_blocks=1 00:06:24.667 00:06:24.667 ' 00:06:24.667 21:33:25 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:24.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.667 --rc genhtml_branch_coverage=1 00:06:24.667 --rc genhtml_function_coverage=1 00:06:24.667 --rc genhtml_legend=1 00:06:24.667 --rc geninfo_all_blocks=1 00:06:24.667 --rc geninfo_unexecuted_blocks=1 00:06:24.667 00:06:24.667 ' 00:06:24.667 21:33:25 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:24.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.667 --rc genhtml_branch_coverage=1 00:06:24.667 --rc genhtml_function_coverage=1 00:06:24.667 --rc genhtml_legend=1 00:06:24.667 --rc geninfo_all_blocks=1 00:06:24.667 --rc geninfo_unexecuted_blocks=1 00:06:24.667 00:06:24.667 ' 00:06:24.667 21:33:25 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:24.667 21:33:25 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:24.667 21:33:25 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:24.667 21:33:25 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:24.667 21:33:25 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:24.667 21:33:25 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:24.667 21:33:25 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:24.667 21:33:25 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:24.667 21:33:25 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:24.667 21:33:25 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:24.667 21:33:25 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:24.667 21:33:25 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:24.667 21:33:25 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e0d89c1-5aa5-4136-8af2-b7f6369ef5ad 00:06:24.667 21:33:25 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=2e0d89c1-5aa5-4136-8af2-b7f6369ef5ad 00:06:24.667 21:33:25 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:24.667 21:33:25 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:24.667 21:33:25 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:24.667 21:33:25 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:24.667 21:33:25 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:24.667 21:33:25 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:24.667 21:33:25 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:24.667 21:33:25 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:24.667 21:33:25 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:24.667 21:33:25 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.667 21:33:25 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.667 21:33:25 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.667 21:33:25 json_config -- paths/export.sh@5 -- # export PATH 00:06:24.667 21:33:25 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.667 21:33:25 json_config -- nvmf/common.sh@51 -- # : 0 00:06:24.667 21:33:25 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:24.667 21:33:25 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:24.667 21:33:25 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:24.667 21:33:25 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:24.667 21:33:25 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:24.667 21:33:25 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:24.667 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:24.667 21:33:25 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:24.667 21:33:25 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:24.667 21:33:25 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:24.667 21:33:25 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
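The `[: : integer expression expected` message logged above (nvmf/common.sh line 33) is `test`'s complaint when `-eq` receives an empty operand: POSIX `[` requires both operands of `-eq` to be integers, so the test itself errors (exit status 2) instead of merely evaluating false. A minimal repro and the usual `${var:-0}` defaulting fix; `flag` here is a stand-in variable, not the script's actual name:

```shell
#!/usr/bin/env bash
flag=""   # stands in for an unset/empty SPDK_TEST_*-style variable

# Reproduces the logged error: `[` cannot compare an empty string numerically.
if [ "$flag" -eq 1 ] 2>/dev/null; then
    echo "enabled"
else
    echo "disabled-or-error"   # printed: the broken test counts as false
fi

# Defaulting the expansion avoids the error entirely:
if [ "${flag:-0}" -eq 1 ]; then
    echo "enabled"
else
    echo "disabled"            # printed: "0" -eq 1 is a clean false
fi
```

Note `${flag:-0}` (with the colon) substitutes for both unset and empty values, which is what matters here since the variable expands to an empty string.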
00:06:24.667 21:33:25 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:24.667 21:33:25 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:24.667 21:33:25 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:24.667 21:33:25 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:24.667 21:33:25 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:24.667 WARNING: No tests are enabled so not running JSON configuration tests 00:06:24.667 21:33:25 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:24.667 00:06:24.667 real 0m0.236s 00:06:24.667 user 0m0.132s 00:06:24.667 sys 0m0.109s 00:06:24.928 21:33:25 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.928 21:33:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.928 ************************************ 00:06:24.928 END TEST json_config 00:06:24.928 ************************************ 00:06:24.928 21:33:25 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:24.928 21:33:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.928 21:33:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.928 21:33:25 -- common/autotest_common.sh@10 -- # set +x 00:06:24.928 ************************************ 00:06:24.928 START TEST json_config_extra_key 00:06:24.928 ************************************ 00:06:24.928 21:33:25 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:24.928 21:33:25 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:24.928 21:33:25 json_config_extra_key -- 
common/autotest_common.sh@1711 -- # lcov --version 00:06:24.928 21:33:25 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:24.928 21:33:25 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:24.928 21:33:25 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.928 21:33:25 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.928 21:33:25 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.928 21:33:25 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.928 21:33:25 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.928 21:33:25 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.928 21:33:25 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.928 21:33:25 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.928 21:33:25 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.928 21:33:25 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.928 21:33:25 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.928 21:33:25 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:24.928 21:33:25 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:24.928 21:33:25 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.928 21:33:25 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:24.928 21:33:25 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:24.928 21:33:25 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:24.928 21:33:25 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.928 21:33:25 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:24.928 21:33:25 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.928 21:33:25 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:24.928 21:33:25 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:24.928 21:33:25 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.928 21:33:25 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:24.928 21:33:25 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.928 21:33:25 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.928 21:33:25 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.928 21:33:25 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:24.928 21:33:25 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.928 21:33:25 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:24.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.928 --rc genhtml_branch_coverage=1 00:06:24.928 --rc genhtml_function_coverage=1 00:06:24.928 --rc genhtml_legend=1 00:06:24.928 --rc geninfo_all_blocks=1 00:06:24.928 --rc geninfo_unexecuted_blocks=1 00:06:24.928 00:06:24.928 ' 00:06:24.928 21:33:25 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:24.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.928 --rc genhtml_branch_coverage=1 00:06:24.928 --rc genhtml_function_coverage=1 00:06:24.928 --rc 
genhtml_legend=1 00:06:24.928 --rc geninfo_all_blocks=1 00:06:24.928 --rc geninfo_unexecuted_blocks=1 00:06:24.928 00:06:24.928 ' 00:06:24.928 21:33:25 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:24.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.928 --rc genhtml_branch_coverage=1 00:06:24.928 --rc genhtml_function_coverage=1 00:06:24.928 --rc genhtml_legend=1 00:06:24.928 --rc geninfo_all_blocks=1 00:06:24.928 --rc geninfo_unexecuted_blocks=1 00:06:24.928 00:06:24.928 ' 00:06:24.928 21:33:25 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:24.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.928 --rc genhtml_branch_coverage=1 00:06:24.928 --rc genhtml_function_coverage=1 00:06:24.928 --rc genhtml_legend=1 00:06:24.928 --rc geninfo_all_blocks=1 00:06:24.928 --rc geninfo_unexecuted_blocks=1 00:06:24.928 00:06:24.928 ' 00:06:24.928 21:33:25 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:24.928 21:33:25 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:24.928 21:33:25 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:24.928 21:33:25 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:24.928 21:33:25 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:24.928 21:33:25 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:24.928 21:33:25 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:24.928 21:33:25 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:24.928 21:33:25 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:24.928 21:33:25 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:24.928 21:33:25 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:24.928 21:33:25 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:25.189 21:33:25 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e0d89c1-5aa5-4136-8af2-b7f6369ef5ad 00:06:25.189 21:33:25 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=2e0d89c1-5aa5-4136-8af2-b7f6369ef5ad 00:06:25.189 21:33:25 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:25.189 21:33:25 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:25.189 21:33:25 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:25.189 21:33:25 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:25.189 21:33:25 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:25.189 21:33:25 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:25.189 21:33:25 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.189 21:33:25 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.189 21:33:25 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.189 21:33:25 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.189 21:33:25 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.189 21:33:25 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.189 21:33:25 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:25.189 21:33:25 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.189 21:33:25 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:25.189 21:33:25 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:25.189 21:33:25 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:25.189 21:33:25 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:25.189 21:33:25 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:25.189 21:33:25 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
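The `cmp_versions` / `lt 1.15 2` xtrace repeated above splits each version string on `IFS=.-:` into an array and compares components numerically, left to right, padding the shorter version with zeros. A self-contained sketch of that logic; `ver_lt` is an illustrative name, not SPDK's actual scripts/common.sh (which additionally validates each component against `^[0-9]+$` — this sketch assumes numeric components):

```shell
#!/usr/bin/env bash
# Element-wise numeric version comparison: returns 0 (true) if $1 < $2.
ver_lt() {
    local -a a b
    local i len
    IFS=.-: read -ra a <<< "$1"
    IFS=.-: read -ra b <<< "$2"
    # Walk the longer of the two component lists.
    len=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < len; i++ )); do
        # Missing components compare as 0, so "1" == "1.0".
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal is not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"   # → prints: 1.15 < 2
```

This is why the trace above resolves `lt 1.15 2` to true in a single step: the first components already differ (`decimal 1` vs `decimal 2`), so lcov 1.15 is classified as older than 2.0 and the legacy `--rc lcov_*_coverage=1` options get selected.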
00:06:25.189 21:33:25 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:25.189 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:25.189 21:33:25 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:25.189 21:33:25 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:25.189 21:33:25 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:25.189 21:33:25 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:25.189 21:33:25 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:25.189 21:33:25 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:25.189 21:33:25 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:25.189 21:33:25 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:25.189 21:33:25 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:25.189 21:33:25 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:25.189 INFO: launching applications... 00:06:25.189 21:33:25 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:25.189 21:33:25 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:25.189 21:33:25 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:25.189 21:33:25 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
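The `waitforlisten 57691 /var/tmp/spdk_tgt.sock` step below blocks until the launched target is up on its UNIX domain socket. A hedged sketch of that polling pattern — `wait_for_socket` and the python3 stand-in server are illustrative only; the real helper also verifies the process is answering RPCs, not just that the socket file exists:

```shell
#!/usr/bin/env bash
# Poll until $sock exists as a socket, bailing out if the process dies first.
wait_for_socket() {
    local sock=$1 pid=$2 retries=${3:-100}
    while (( retries-- > 0 )); do
        # Give up immediately if the app already exited.
        kill -0 "$pid" 2>/dev/null || return 1
        [[ -S $sock ]] && return 0
        sleep 0.1
    done
    return 1
}

# Demo: a throwaway "server" that binds its socket after a short delay.
sock=$(mktemp -u)
python3 -c "import socket, sys, time
time.sleep(0.3)
s = socket.socket(socket.AF_UNIX)
s.bind(sys.argv[1])
time.sleep(10)" "$sock" &
srv=$!

if wait_for_socket "$sock" "$srv"; then
    echo "listening on $sock"
fi
kill "$srv" 2>/dev/null
rm -f "$sock"
```

The early `kill -0` check matters: without it, a target that crashes on startup would make the caller spin for the full timeout instead of failing fast.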
00:06:25.189 21:33:25 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:25.189 21:33:25 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:25.189 21:33:25 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:25.189 21:33:25 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:25.189 21:33:25 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:25.189 21:33:25 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:25.189 21:33:25 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:25.189 21:33:25 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:25.189 21:33:25 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57691 00:06:25.189 21:33:25 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:25.189 Waiting for target to run... 00:06:25.189 21:33:25 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57691 /var/tmp/spdk_tgt.sock 00:06:25.189 21:33:25 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:25.189 21:33:25 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57691 ']' 00:06:25.189 21:33:25 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:25.189 21:33:25 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.189 21:33:25 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:06:25.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:25.189 21:33:25 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.189 21:33:25 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:25.189 [2024-12-10 21:33:25.835116] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:06:25.189 [2024-12-10 21:33:25.835318] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57691 ] 00:06:25.758 [2024-12-10 21:33:26.241932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.758 [2024-12-10 21:33:26.354941] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.698 21:33:27 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.698 21:33:27 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:26.698 21:33:27 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:26.698 00:06:26.698 21:33:27 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:26.698 INFO: shutting down applications... 
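The repeated `kill -0 57691` / `sleep 0.5` lines below are json_config/common.sh's shutdown loop: send SIGINT, then poll up to 30 times (about 15 s) for the process to disappear. A sketch of the pattern; `shutdown_app` and its escalation step are illustrative, and the demo deliberately uses SIGTERM because a backgrounded `sleep` in a non-interactive shell ignores SIGINT (the real harness's spdk_tgt installs its own SIGINT handler):

```shell
#!/usr/bin/env bash
# Signal the process, then poll until it exits or the retry budget runs out.
shutdown_app() {
    local pid=$1 sig=${2:-TERM} i
    kill -s "$sig" "$pid" 2>/dev/null
    for (( i = 0; i < 30; i++ )); do
        # kill -0 delivers no signal; it only checks the pid still exists.
        kill -0 "$pid" 2>/dev/null || { echo "shutdown done"; return 0; }
        sleep 0.5
    done
    echo "still alive after ~15s; escalating" >&2
    kill -SIGKILL "$pid" 2>/dev/null
    return 1
}

sleep 60 &
shutdown_app $! TERM   # → prints: shutdown done
```

Polling with `kill -0` rather than `wait` lets the caller enforce its own timeout and escalate to SIGKILL, which is why the log shows a bounded `(( i < 30 ))` loop instead of an open-ended wait.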
00:06:26.698 21:33:27 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:26.698 21:33:27 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:26.698 21:33:27 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:26.698 21:33:27 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57691 ]] 00:06:26.698 21:33:27 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57691 00:06:26.698 21:33:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:26.698 21:33:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:26.698 21:33:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57691 00:06:26.698 21:33:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:26.958 21:33:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:26.958 21:33:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:26.958 21:33:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57691 00:06:26.958 21:33:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:27.526 21:33:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:27.526 21:33:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:27.526 21:33:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57691 00:06:27.526 21:33:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:28.095 21:33:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:28.095 21:33:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:28.095 21:33:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57691 00:06:28.095 21:33:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:28.686 21:33:29 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:06:28.686 21:33:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:28.686 21:33:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57691 00:06:28.686 21:33:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:28.945 21:33:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:28.945 21:33:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:28.945 21:33:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57691 00:06:28.945 21:33:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:29.513 21:33:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:29.513 21:33:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:29.513 21:33:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57691 00:06:29.513 21:33:30 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:29.513 SPDK target shutdown done 00:06:29.513 21:33:30 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:29.513 21:33:30 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:29.513 21:33:30 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:29.513 Success 00:06:29.513 21:33:30 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:29.513 00:06:29.513 real 0m4.648s 00:06:29.513 user 0m4.275s 00:06:29.513 sys 0m0.573s 00:06:29.513 ************************************ 00:06:29.513 END TEST json_config_extra_key 00:06:29.513 ************************************ 00:06:29.513 21:33:30 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.513 21:33:30 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:29.513 21:33:30 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:29.513 21:33:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.513 21:33:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.513 21:33:30 -- common/autotest_common.sh@10 -- # set +x 00:06:29.513 ************************************ 00:06:29.513 START TEST alias_rpc 00:06:29.513 ************************************ 00:06:29.513 21:33:30 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:29.772 * Looking for test storage... 00:06:29.772 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:29.772 21:33:30 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:29.772 21:33:30 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:29.772 21:33:30 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:29.772 21:33:30 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:29.772 21:33:30 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.772 21:33:30 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.772 21:33:30 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.772 21:33:30 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.772 21:33:30 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.772 21:33:30 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.772 21:33:30 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.772 21:33:30 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.772 21:33:30 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.772 21:33:30 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.772 21:33:30 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.772 21:33:30 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:29.772 21:33:30 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:06:29.772 21:33:30 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.772 21:33:30 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:29.772 21:33:30 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:29.772 21:33:30 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:29.772 21:33:30 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.772 21:33:30 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:29.772 21:33:30 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.772 21:33:30 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:29.772 21:33:30 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:29.772 21:33:30 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.772 21:33:30 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:29.772 21:33:30 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.772 21:33:30 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.772 21:33:30 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.772 21:33:30 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:29.772 21:33:30 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.772 21:33:30 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:29.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.772 --rc genhtml_branch_coverage=1 00:06:29.772 --rc genhtml_function_coverage=1 00:06:29.772 --rc genhtml_legend=1 00:06:29.772 --rc geninfo_all_blocks=1 00:06:29.772 --rc geninfo_unexecuted_blocks=1 00:06:29.772 00:06:29.772 ' 00:06:29.772 21:33:30 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:29.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.772 --rc genhtml_branch_coverage=1 00:06:29.772 --rc genhtml_function_coverage=1 00:06:29.772 --rc 
genhtml_legend=1 00:06:29.772 --rc geninfo_all_blocks=1 00:06:29.772 --rc geninfo_unexecuted_blocks=1 00:06:29.772 00:06:29.772 ' 00:06:29.772 21:33:30 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:29.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.772 --rc genhtml_branch_coverage=1 00:06:29.772 --rc genhtml_function_coverage=1 00:06:29.772 --rc genhtml_legend=1 00:06:29.772 --rc geninfo_all_blocks=1 00:06:29.772 --rc geninfo_unexecuted_blocks=1 00:06:29.772 00:06:29.772 ' 00:06:29.772 21:33:30 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:29.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.772 --rc genhtml_branch_coverage=1 00:06:29.772 --rc genhtml_function_coverage=1 00:06:29.772 --rc genhtml_legend=1 00:06:29.772 --rc geninfo_all_blocks=1 00:06:29.772 --rc geninfo_unexecuted_blocks=1 00:06:29.772 00:06:29.772 ' 00:06:29.772 21:33:30 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:29.772 21:33:30 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57802 00:06:29.772 21:33:30 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:29.772 21:33:30 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57802 00:06:29.772 21:33:30 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57802 ']' 00:06:29.772 21:33:30 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.772 21:33:30 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.772 21:33:30 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
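The `waitforlisten 57802` entry above (with `max_retries=100`) has the harness poll until spdk_tgt is up and listening on the UNIX domain socket /var/tmp/spdk.sock. A minimal sketch of that retry loop, using a hypothetical `wait_for` helper rather than the actual autotest_common.sh implementation (which issues a real RPC rather than just checking the socket):

```shell
#!/usr/bin/env bash
# Hypothetical retry helper in the spirit of waitforlisten: run a
# predicate command until it succeeds or the retry budget is spent.
wait_for() {
    local retries=$1
    shift
    while (( retries-- > 0 )); do
        "$@" && return 0   # predicate succeeded; stop polling
        sleep 0.1          # brief back-off between attempts
    done
    return 1               # budget exhausted; caller decides what to do
}

# The real harness would poll the RPC socket, roughly:
#   wait_for 100 test -S /var/tmp/spdk.sock
wait_for 5 true && echo 'predicate came up'
```

The predicate is passed as a command so the same loop can wait on a socket, a PID file, or an RPC probe.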
00:06:29.772 21:33:30 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.773 21:33:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.032 [2024-12-10 21:33:30.565999] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:06:30.032 [2024-12-10 21:33:30.566124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57802 ] 00:06:30.032 [2024-12-10 21:33:30.741680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.291 [2024-12-10 21:33:30.871054] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.229 21:33:31 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.229 21:33:31 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:31.229 21:33:31 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:31.488 21:33:32 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57802 00:06:31.488 21:33:32 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57802 ']' 00:06:31.488 21:33:32 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57802 00:06:31.488 21:33:32 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:31.488 21:33:32 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:31.488 21:33:32 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57802 00:06:31.488 killing process with pid 57802 00:06:31.488 21:33:32 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:31.488 21:33:32 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:31.488 21:33:32 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57802' 00:06:31.488 21:33:32 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 57802 00:06:31.488 21:33:32 alias_rpc -- common/autotest_common.sh@978 -- # wait 57802 00:06:34.025 00:06:34.025 real 0m4.358s 00:06:34.025 user 0m4.450s 00:06:34.025 sys 0m0.593s 00:06:34.025 21:33:34 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.025 21:33:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.025 ************************************ 00:06:34.025 END TEST alias_rpc 00:06:34.025 ************************************ 00:06:34.025 21:33:34 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:34.025 21:33:34 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:34.025 21:33:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.025 21:33:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.025 21:33:34 -- common/autotest_common.sh@10 -- # set +x 00:06:34.025 ************************************ 00:06:34.025 START TEST spdkcli_tcp 00:06:34.025 ************************************ 00:06:34.025 21:33:34 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:34.025 * Looking for test storage... 
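The `killprocess 57802` sequence above mirrors the shutdown loop from json_config/common.sh seen earlier: send the target a signal, then poll `kill -0` every 0.5 s, up to 30 tries, until the PID is gone. A self-contained sketch of that pattern, with a plain `sleep` as a stand-in for spdk_tgt (an assumption, not the real target):

```shell
#!/usr/bin/env bash
# The stand-in is started via command substitution so it is NOT our
# direct child: a zombie child would keep `kill -0` succeeding until
# reaped, whereas an orphan is reaped by init as soon as it exits.
app_pid=$(sleep 60 >/dev/null 2>&1 & echo $!)

# The harness sends SIGINT; backgrounded jobs in non-interactive shells
# ignore SIGINT, so the stand-in gets SIGTERM to keep the sketch honest.
kill -TERM "$app_pid"

status=timeout
for (( i = 0; i < 30; i++ )); do    # common.sh caps the poll at 30 tries
    if ! kill -0 "$app_pid" 2>/dev/null; then
        status='SPDK target shutdown done'
        break
    fi
    sleep 0.5                       # same 0.5 s interval as common.sh@45
done
echo "$status"
```

`kill -0` sends no signal at all; it only asks the kernel whether the PID still exists, which is why the loop can use it as a liveness probe.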
00:06:34.025 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:34.025 21:33:34 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:34.025 21:33:34 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:34.025 21:33:34 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:34.284 21:33:34 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:34.284 21:33:34 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.284 21:33:34 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.284 21:33:34 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.284 21:33:34 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.284 21:33:34 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.284 21:33:34 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.284 21:33:34 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.284 21:33:34 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.284 21:33:34 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.284 21:33:34 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.285 21:33:34 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.285 21:33:34 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:34.285 21:33:34 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:34.285 21:33:34 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.285 21:33:34 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:34.285 21:33:34 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:34.285 21:33:34 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:34.285 21:33:34 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.285 21:33:34 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:34.285 21:33:34 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.285 21:33:34 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:34.285 21:33:34 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:34.285 21:33:34 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.285 21:33:34 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:34.285 21:33:34 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.285 21:33:34 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.285 21:33:34 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.285 21:33:34 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:34.285 21:33:34 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.285 21:33:34 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:34.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.285 --rc genhtml_branch_coverage=1 00:06:34.285 --rc genhtml_function_coverage=1 00:06:34.285 --rc genhtml_legend=1 00:06:34.285 --rc geninfo_all_blocks=1 00:06:34.285 --rc geninfo_unexecuted_blocks=1 00:06:34.285 00:06:34.285 ' 00:06:34.285 21:33:34 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:34.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.285 --rc genhtml_branch_coverage=1 00:06:34.285 --rc genhtml_function_coverage=1 00:06:34.285 --rc genhtml_legend=1 00:06:34.285 --rc geninfo_all_blocks=1 00:06:34.285 --rc geninfo_unexecuted_blocks=1 00:06:34.285 00:06:34.285 ' 00:06:34.285 21:33:34 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:34.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.285 --rc genhtml_branch_coverage=1 00:06:34.285 --rc genhtml_function_coverage=1 00:06:34.285 --rc genhtml_legend=1 00:06:34.285 --rc geninfo_all_blocks=1 00:06:34.285 --rc geninfo_unexecuted_blocks=1 00:06:34.285 00:06:34.285 ' 00:06:34.285 21:33:34 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:34.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.285 --rc genhtml_branch_coverage=1 00:06:34.285 --rc genhtml_function_coverage=1 00:06:34.285 --rc genhtml_legend=1 00:06:34.285 --rc geninfo_all_blocks=1 00:06:34.285 --rc geninfo_unexecuted_blocks=1 00:06:34.285 00:06:34.285 ' 00:06:34.285 21:33:34 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:34.285 21:33:34 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:34.285 21:33:34 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:34.285 21:33:34 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:34.285 21:33:34 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:34.285 21:33:34 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:34.285 21:33:34 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:34.285 21:33:34 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:34.285 21:33:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:34.285 21:33:34 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57915 00:06:34.285 21:33:34 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:34.285 21:33:34 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57915 00:06:34.285 21:33:34 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 57915 ']' 00:06:34.285 21:33:34 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.285 21:33:34 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.285 21:33:34 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.285 21:33:34 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.285 21:33:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:34.285 [2024-12-10 21:33:35.017177] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:06:34.285 [2024-12-10 21:33:35.017408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57915 ] 00:06:34.543 [2024-12-10 21:33:35.196209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:34.543 [2024-12-10 21:33:35.321524] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.543 [2024-12-10 21:33:35.321573] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.480 21:33:36 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.480 21:33:36 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:35.480 21:33:36 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57932 00:06:35.480 21:33:36 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:35.480 21:33:36 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:35.740 [ 00:06:35.740 "bdev_malloc_delete", 
00:06:35.740 "bdev_malloc_create", 00:06:35.740 "bdev_null_resize", 00:06:35.740 "bdev_null_delete", 00:06:35.740 "bdev_null_create", 00:06:35.740 "bdev_nvme_cuse_unregister", 00:06:35.740 "bdev_nvme_cuse_register", 00:06:35.740 "bdev_opal_new_user", 00:06:35.740 "bdev_opal_set_lock_state", 00:06:35.740 "bdev_opal_delete", 00:06:35.740 "bdev_opal_get_info", 00:06:35.740 "bdev_opal_create", 00:06:35.740 "bdev_nvme_opal_revert", 00:06:35.740 "bdev_nvme_opal_init", 00:06:35.740 "bdev_nvme_send_cmd", 00:06:35.740 "bdev_nvme_set_keys", 00:06:35.740 "bdev_nvme_get_path_iostat", 00:06:35.740 "bdev_nvme_get_mdns_discovery_info", 00:06:35.740 "bdev_nvme_stop_mdns_discovery", 00:06:35.740 "bdev_nvme_start_mdns_discovery", 00:06:35.740 "bdev_nvme_set_multipath_policy", 00:06:35.740 "bdev_nvme_set_preferred_path", 00:06:35.740 "bdev_nvme_get_io_paths", 00:06:35.740 "bdev_nvme_remove_error_injection", 00:06:35.740 "bdev_nvme_add_error_injection", 00:06:35.740 "bdev_nvme_get_discovery_info", 00:06:35.740 "bdev_nvme_stop_discovery", 00:06:35.740 "bdev_nvme_start_discovery", 00:06:35.740 "bdev_nvme_get_controller_health_info", 00:06:35.740 "bdev_nvme_disable_controller", 00:06:35.740 "bdev_nvme_enable_controller", 00:06:35.740 "bdev_nvme_reset_controller", 00:06:35.740 "bdev_nvme_get_transport_statistics", 00:06:35.740 "bdev_nvme_apply_firmware", 00:06:35.740 "bdev_nvme_detach_controller", 00:06:35.740 "bdev_nvme_get_controllers", 00:06:35.740 "bdev_nvme_attach_controller", 00:06:35.740 "bdev_nvme_set_hotplug", 00:06:35.740 "bdev_nvme_set_options", 00:06:35.740 "bdev_passthru_delete", 00:06:35.740 "bdev_passthru_create", 00:06:35.740 "bdev_lvol_set_parent_bdev", 00:06:35.740 "bdev_lvol_set_parent", 00:06:35.740 "bdev_lvol_check_shallow_copy", 00:06:35.740 "bdev_lvol_start_shallow_copy", 00:06:35.740 "bdev_lvol_grow_lvstore", 00:06:35.740 "bdev_lvol_get_lvols", 00:06:35.740 "bdev_lvol_get_lvstores", 00:06:35.740 "bdev_lvol_delete", 00:06:35.740 "bdev_lvol_set_read_only", 
00:06:35.740 "bdev_lvol_resize", 00:06:35.740 "bdev_lvol_decouple_parent", 00:06:35.740 "bdev_lvol_inflate", 00:06:35.740 "bdev_lvol_rename", 00:06:35.740 "bdev_lvol_clone_bdev", 00:06:35.740 "bdev_lvol_clone", 00:06:35.740 "bdev_lvol_snapshot", 00:06:35.740 "bdev_lvol_create", 00:06:35.740 "bdev_lvol_delete_lvstore", 00:06:35.740 "bdev_lvol_rename_lvstore", 00:06:35.740 "bdev_lvol_create_lvstore", 00:06:35.740 "bdev_raid_set_options", 00:06:35.740 "bdev_raid_remove_base_bdev", 00:06:35.740 "bdev_raid_add_base_bdev", 00:06:35.740 "bdev_raid_delete", 00:06:35.740 "bdev_raid_create", 00:06:35.740 "bdev_raid_get_bdevs", 00:06:35.740 "bdev_error_inject_error", 00:06:35.740 "bdev_error_delete", 00:06:35.740 "bdev_error_create", 00:06:35.740 "bdev_split_delete", 00:06:35.740 "bdev_split_create", 00:06:35.740 "bdev_delay_delete", 00:06:35.740 "bdev_delay_create", 00:06:35.740 "bdev_delay_update_latency", 00:06:35.740 "bdev_zone_block_delete", 00:06:35.740 "bdev_zone_block_create", 00:06:35.740 "blobfs_create", 00:06:35.740 "blobfs_detect", 00:06:35.740 "blobfs_set_cache_size", 00:06:35.740 "bdev_aio_delete", 00:06:35.740 "bdev_aio_rescan", 00:06:35.740 "bdev_aio_create", 00:06:35.740 "bdev_ftl_set_property", 00:06:35.740 "bdev_ftl_get_properties", 00:06:35.740 "bdev_ftl_get_stats", 00:06:35.740 "bdev_ftl_unmap", 00:06:35.740 "bdev_ftl_unload", 00:06:35.740 "bdev_ftl_delete", 00:06:35.740 "bdev_ftl_load", 00:06:35.740 "bdev_ftl_create", 00:06:35.740 "bdev_virtio_attach_controller", 00:06:35.740 "bdev_virtio_scsi_get_devices", 00:06:35.740 "bdev_virtio_detach_controller", 00:06:35.740 "bdev_virtio_blk_set_hotplug", 00:06:35.740 "bdev_iscsi_delete", 00:06:35.740 "bdev_iscsi_create", 00:06:35.740 "bdev_iscsi_set_options", 00:06:35.740 "accel_error_inject_error", 00:06:35.740 "ioat_scan_accel_module", 00:06:35.740 "dsa_scan_accel_module", 00:06:35.740 "iaa_scan_accel_module", 00:06:35.740 "keyring_file_remove_key", 00:06:35.740 "keyring_file_add_key", 00:06:35.740 
"keyring_linux_set_options", 00:06:35.740 "fsdev_aio_delete", 00:06:35.740 "fsdev_aio_create", 00:06:35.740 "iscsi_get_histogram", 00:06:35.740 "iscsi_enable_histogram", 00:06:35.740 "iscsi_set_options", 00:06:35.740 "iscsi_get_auth_groups", 00:06:35.740 "iscsi_auth_group_remove_secret", 00:06:35.740 "iscsi_auth_group_add_secret", 00:06:35.740 "iscsi_delete_auth_group", 00:06:35.740 "iscsi_create_auth_group", 00:06:35.740 "iscsi_set_discovery_auth", 00:06:35.740 "iscsi_get_options", 00:06:35.740 "iscsi_target_node_request_logout", 00:06:35.740 "iscsi_target_node_set_redirect", 00:06:35.740 "iscsi_target_node_set_auth", 00:06:35.740 "iscsi_target_node_add_lun", 00:06:35.740 "iscsi_get_stats", 00:06:35.740 "iscsi_get_connections", 00:06:35.740 "iscsi_portal_group_set_auth", 00:06:35.740 "iscsi_start_portal_group", 00:06:35.740 "iscsi_delete_portal_group", 00:06:35.740 "iscsi_create_portal_group", 00:06:35.740 "iscsi_get_portal_groups", 00:06:35.740 "iscsi_delete_target_node", 00:06:35.740 "iscsi_target_node_remove_pg_ig_maps", 00:06:35.740 "iscsi_target_node_add_pg_ig_maps", 00:06:35.740 "iscsi_create_target_node", 00:06:35.740 "iscsi_get_target_nodes", 00:06:35.740 "iscsi_delete_initiator_group", 00:06:35.740 "iscsi_initiator_group_remove_initiators", 00:06:35.740 "iscsi_initiator_group_add_initiators", 00:06:35.740 "iscsi_create_initiator_group", 00:06:35.740 "iscsi_get_initiator_groups", 00:06:35.740 "nvmf_set_crdt", 00:06:35.740 "nvmf_set_config", 00:06:35.740 "nvmf_set_max_subsystems", 00:06:35.740 "nvmf_stop_mdns_prr", 00:06:35.740 "nvmf_publish_mdns_prr", 00:06:35.740 "nvmf_subsystem_get_listeners", 00:06:35.740 "nvmf_subsystem_get_qpairs", 00:06:35.740 "nvmf_subsystem_get_controllers", 00:06:35.740 "nvmf_get_stats", 00:06:35.740 "nvmf_get_transports", 00:06:35.740 "nvmf_create_transport", 00:06:35.740 "nvmf_get_targets", 00:06:35.740 "nvmf_delete_target", 00:06:35.740 "nvmf_create_target", 00:06:35.740 "nvmf_subsystem_allow_any_host", 00:06:35.740 
"nvmf_subsystem_set_keys", 00:06:35.740 "nvmf_subsystem_remove_host", 00:06:35.740 "nvmf_subsystem_add_host", 00:06:35.740 "nvmf_ns_remove_host", 00:06:35.740 "nvmf_ns_add_host", 00:06:35.740 "nvmf_subsystem_remove_ns", 00:06:35.740 "nvmf_subsystem_set_ns_ana_group", 00:06:35.740 "nvmf_subsystem_add_ns", 00:06:35.740 "nvmf_subsystem_listener_set_ana_state", 00:06:35.740 "nvmf_discovery_get_referrals", 00:06:35.740 "nvmf_discovery_remove_referral", 00:06:35.740 "nvmf_discovery_add_referral", 00:06:35.740 "nvmf_subsystem_remove_listener", 00:06:35.740 "nvmf_subsystem_add_listener", 00:06:35.740 "nvmf_delete_subsystem", 00:06:35.740 "nvmf_create_subsystem", 00:06:35.740 "nvmf_get_subsystems", 00:06:35.740 "env_dpdk_get_mem_stats", 00:06:35.740 "nbd_get_disks", 00:06:35.740 "nbd_stop_disk", 00:06:35.740 "nbd_start_disk", 00:06:35.740 "ublk_recover_disk", 00:06:35.740 "ublk_get_disks", 00:06:35.740 "ublk_stop_disk", 00:06:35.740 "ublk_start_disk", 00:06:35.740 "ublk_destroy_target", 00:06:35.740 "ublk_create_target", 00:06:35.740 "virtio_blk_create_transport", 00:06:35.740 "virtio_blk_get_transports", 00:06:35.740 "vhost_controller_set_coalescing", 00:06:35.740 "vhost_get_controllers", 00:06:35.740 "vhost_delete_controller", 00:06:35.740 "vhost_create_blk_controller", 00:06:35.740 "vhost_scsi_controller_remove_target", 00:06:35.740 "vhost_scsi_controller_add_target", 00:06:35.740 "vhost_start_scsi_controller", 00:06:35.740 "vhost_create_scsi_controller", 00:06:35.740 "thread_set_cpumask", 00:06:35.740 "scheduler_set_options", 00:06:35.740 "framework_get_governor", 00:06:35.740 "framework_get_scheduler", 00:06:35.740 "framework_set_scheduler", 00:06:35.740 "framework_get_reactors", 00:06:35.740 "thread_get_io_channels", 00:06:35.740 "thread_get_pollers", 00:06:35.740 "thread_get_stats", 00:06:35.740 "framework_monitor_context_switch", 00:06:35.740 "spdk_kill_instance", 00:06:35.740 "log_enable_timestamps", 00:06:35.740 "log_get_flags", 00:06:35.740 "log_clear_flag", 
00:06:35.740 "log_set_flag", 00:06:35.740 "log_get_level", 00:06:35.740 "log_set_level", 00:06:35.740 "log_get_print_level", 00:06:35.740 "log_set_print_level", 00:06:35.740 "framework_enable_cpumask_locks", 00:06:35.740 "framework_disable_cpumask_locks", 00:06:35.740 "framework_wait_init", 00:06:35.740 "framework_start_init", 00:06:35.740 "scsi_get_devices", 00:06:35.740 "bdev_get_histogram", 00:06:35.740 "bdev_enable_histogram", 00:06:35.740 "bdev_set_qos_limit", 00:06:35.740 "bdev_set_qd_sampling_period", 00:06:35.740 "bdev_get_bdevs", 00:06:35.740 "bdev_reset_iostat", 00:06:35.740 "bdev_get_iostat", 00:06:35.740 "bdev_examine", 00:06:35.740 "bdev_wait_for_examine", 00:06:35.740 "bdev_set_options", 00:06:35.740 "accel_get_stats", 00:06:35.740 "accel_set_options", 00:06:35.740 "accel_set_driver", 00:06:35.740 "accel_crypto_key_destroy", 00:06:35.740 "accel_crypto_keys_get", 00:06:35.741 "accel_crypto_key_create", 00:06:35.741 "accel_assign_opc", 00:06:35.741 "accel_get_module_info", 00:06:35.741 "accel_get_opc_assignments", 00:06:35.741 "vmd_rescan", 00:06:35.741 "vmd_remove_device", 00:06:35.741 "vmd_enable", 00:06:35.741 "sock_get_default_impl", 00:06:35.741 "sock_set_default_impl", 00:06:35.741 "sock_impl_set_options", 00:06:35.741 "sock_impl_get_options", 00:06:35.741 "iobuf_get_stats", 00:06:35.741 "iobuf_set_options", 00:06:35.741 "keyring_get_keys", 00:06:35.741 "framework_get_pci_devices", 00:06:35.741 "framework_get_config", 00:06:35.741 "framework_get_subsystems", 00:06:35.741 "fsdev_set_opts", 00:06:35.741 "fsdev_get_opts", 00:06:35.741 "trace_get_info", 00:06:35.741 "trace_get_tpoint_group_mask", 00:06:35.741 "trace_disable_tpoint_group", 00:06:35.741 "trace_enable_tpoint_group", 00:06:35.741 "trace_clear_tpoint_mask", 00:06:35.741 "trace_set_tpoint_mask", 00:06:35.741 "notify_get_notifications", 00:06:35.741 "notify_get_types", 00:06:35.741 "spdk_get_version", 00:06:35.741 "rpc_get_methods" 00:06:35.741 ] 00:06:35.741 21:33:36 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:35.741 21:33:36 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:35.741 21:33:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:36.000 21:33:36 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:36.000 21:33:36 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57915 00:06:36.000 21:33:36 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57915 ']' 00:06:36.000 21:33:36 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57915 00:06:36.000 21:33:36 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:36.000 21:33:36 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.000 21:33:36 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57915 00:06:36.000 killing process with pid 57915 00:06:36.000 21:33:36 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.000 21:33:36 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.000 21:33:36 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57915' 00:06:36.000 21:33:36 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57915 00:06:36.000 21:33:36 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57915 00:06:38.536 ************************************ 00:06:38.536 END TEST spdkcli_tcp 00:06:38.536 ************************************ 00:06:38.536 00:06:38.536 real 0m4.491s 00:06:38.536 user 0m8.032s 00:06:38.536 sys 0m0.652s 00:06:38.536 21:33:39 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.536 21:33:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:38.536 21:33:39 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:38.536 21:33:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.536 21:33:39 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.536 21:33:39 -- common/autotest_common.sh@10 -- # set +x 00:06:38.536 ************************************ 00:06:38.536 START TEST dpdk_mem_utility 00:06:38.536 ************************************ 00:06:38.536 21:33:39 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:38.796 * Looking for test storage... 00:06:38.796 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:38.796 21:33:39 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:38.796 21:33:39 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:06:38.796 21:33:39 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:38.796 21:33:39 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:38.796 21:33:39 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.796 21:33:39 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.796 21:33:39 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.796 21:33:39 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.796 21:33:39 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.796 21:33:39 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.796 21:33:39 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.796 21:33:39 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.796 21:33:39 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.796 21:33:39 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.796 21:33:39 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.796 21:33:39 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:38.796 21:33:39 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:38.796 
21:33:39 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.796 21:33:39 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:38.796 21:33:39 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:38.796 21:33:39 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:38.796 21:33:39 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.796 21:33:39 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:38.796 21:33:39 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.796 21:33:39 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:38.796 21:33:39 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:38.796 21:33:39 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.796 21:33:39 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:38.796 21:33:39 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.796 21:33:39 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.796 21:33:39 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.796 21:33:39 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:38.796 21:33:39 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.796 21:33:39 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:38.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.796 --rc genhtml_branch_coverage=1 00:06:38.796 --rc genhtml_function_coverage=1 00:06:38.796 --rc genhtml_legend=1 00:06:38.796 --rc geninfo_all_blocks=1 00:06:38.796 --rc geninfo_unexecuted_blocks=1 00:06:38.796 00:06:38.796 ' 00:06:38.796 21:33:39 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:38.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.796 --rc 
genhtml_branch_coverage=1 00:06:38.796 --rc genhtml_function_coverage=1 00:06:38.796 --rc genhtml_legend=1 00:06:38.796 --rc geninfo_all_blocks=1 00:06:38.796 --rc geninfo_unexecuted_blocks=1 00:06:38.796 00:06:38.796 ' 00:06:38.796 21:33:39 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:38.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.796 --rc genhtml_branch_coverage=1 00:06:38.796 --rc genhtml_function_coverage=1 00:06:38.796 --rc genhtml_legend=1 00:06:38.796 --rc geninfo_all_blocks=1 00:06:38.796 --rc geninfo_unexecuted_blocks=1 00:06:38.796 00:06:38.796 ' 00:06:38.796 21:33:39 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:38.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.796 --rc genhtml_branch_coverage=1 00:06:38.796 --rc genhtml_function_coverage=1 00:06:38.796 --rc genhtml_legend=1 00:06:38.796 --rc geninfo_all_blocks=1 00:06:38.796 --rc geninfo_unexecuted_blocks=1 00:06:38.796 00:06:38.796 ' 00:06:38.796 21:33:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:38.796 21:33:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58037 00:06:38.796 21:33:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:38.796 21:33:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58037 00:06:38.796 21:33:39 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58037 ']' 00:06:38.796 21:33:39 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.796 21:33:39 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.796 21:33:39 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:06:38.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.796 21:33:39 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.796 21:33:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:38.796 [2024-12-10 21:33:39.547138] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:06:38.796 [2024-12-10 21:33:39.547371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58037 ] 00:06:39.055 [2024-12-10 21:33:39.721870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.313 [2024-12-10 21:33:39.843319] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.292 21:33:40 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.292 21:33:40 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:40.293 21:33:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:40.293 21:33:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:40.293 21:33:40 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.293 21:33:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:40.293 { 00:06:40.293 "filename": "/tmp/spdk_mem_dump.txt" 00:06:40.293 } 00:06:40.293 21:33:40 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.293 21:33:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:40.293 DPDK memory size 824.000000 MiB in 1 heap(s) 00:06:40.293 1 heaps totaling size 824.000000 MiB 00:06:40.293 size: 
824.000000 MiB heap id: 0 00:06:40.293 end heaps---------- 00:06:40.293 9 mempools totaling size 603.782043 MiB 00:06:40.293 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:40.293 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:40.293 size: 100.555481 MiB name: bdev_io_58037 00:06:40.293 size: 50.003479 MiB name: msgpool_58037 00:06:40.293 size: 36.509338 MiB name: fsdev_io_58037 00:06:40.293 size: 21.763794 MiB name: PDU_Pool 00:06:40.293 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:40.293 size: 4.133484 MiB name: evtpool_58037 00:06:40.293 size: 0.026123 MiB name: Session_Pool 00:06:40.293 end mempools------- 00:06:40.293 6 memzones totaling size 4.142822 MiB 00:06:40.293 size: 1.000366 MiB name: RG_ring_0_58037 00:06:40.293 size: 1.000366 MiB name: RG_ring_1_58037 00:06:40.293 size: 1.000366 MiB name: RG_ring_4_58037 00:06:40.293 size: 1.000366 MiB name: RG_ring_5_58037 00:06:40.293 size: 0.125366 MiB name: RG_ring_2_58037 00:06:40.293 size: 0.015991 MiB name: RG_ring_3_58037 00:06:40.293 end memzones------- 00:06:40.293 21:33:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:40.293 heap id: 0 total size: 824.000000 MiB number of busy elements: 322 number of free elements: 18 00:06:40.293 list of free elements. 
size: 16.779663 MiB 00:06:40.293 element at address: 0x200006400000 with size: 1.995972 MiB 00:06:40.293 element at address: 0x20000a600000 with size: 1.995972 MiB 00:06:40.293 element at address: 0x200003e00000 with size: 1.991028 MiB 00:06:40.293 element at address: 0x200019500040 with size: 0.999939 MiB 00:06:40.293 element at address: 0x200019900040 with size: 0.999939 MiB 00:06:40.293 element at address: 0x200019a00000 with size: 0.999084 MiB 00:06:40.293 element at address: 0x200032600000 with size: 0.994324 MiB 00:06:40.293 element at address: 0x200000400000 with size: 0.992004 MiB 00:06:40.293 element at address: 0x200019200000 with size: 0.959656 MiB 00:06:40.293 element at address: 0x200019d00040 with size: 0.936401 MiB 00:06:40.293 element at address: 0x200000200000 with size: 0.716980 MiB 00:06:40.293 element at address: 0x20001b400000 with size: 0.561218 MiB 00:06:40.293 element at address: 0x200000c00000 with size: 0.489197 MiB 00:06:40.293 element at address: 0x200019600000 with size: 0.487976 MiB 00:06:40.293 element at address: 0x200019e00000 with size: 0.485413 MiB 00:06:40.293 element at address: 0x200012c00000 with size: 0.433228 MiB 00:06:40.293 element at address: 0x200028800000 with size: 0.390442 MiB 00:06:40.293 element at address: 0x200000800000 with size: 0.350891 MiB 00:06:40.293 list of standard malloc elements. 
size: 199.289429 MiB 00:06:40.293 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:06:40.293 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:06:40.293 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:40.293 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:06:40.293 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:06:40.293 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:40.293 element at address: 0x200019deff40 with size: 0.062683 MiB 00:06:40.293 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:40.293 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:06:40.293 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:06:40.293 element at address: 0x200012bff040 with size: 0.000305 MiB 00:06:40.293 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:40.293 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:40.293 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:06:40.293 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:06:40.293 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:06:40.293 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:06:40.293 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:06:40.293 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:06:40.293 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:06:40.293 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:06:40.293 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:06:40.293 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:06:40.293 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:06:40.293 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:06:40.293 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:06:40.293 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:06:40.293 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:06:40.293 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:06:40.293 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:06:40.293 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:06:40.293 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:06:40.293 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:06:40.293 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:06:40.293 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:06:40.293 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:06:40.293 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:06:40.293 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:06:40.293 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:06:40.293 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:06:40.293 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:06:40.293 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:06:40.293 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:06:40.293 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x200000c7e4c0 with 
size: 0.000244 MiB 00:06:40.293 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:06:40.293 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:06:40.293 element at address: 0x200000cff000 with size: 0.000244 MiB 00:06:40.293 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:06:40.293 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:06:40.293 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:06:40.293 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:06:40.293 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:06:40.293 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:06:40.293 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:06:40.293 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:06:40.293 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:06:40.293 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:06:40.293 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:06:40.293 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:06:40.293 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:06:40.293 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:06:40.293 element at address: 0x200012bff180 with size: 0.000244 MiB 00:06:40.293 element at address: 0x200012bff280 with size: 0.000244 MiB 00:06:40.294 element at address: 0x200012bff380 with size: 0.000244 MiB 00:06:40.294 element at address: 0x200012bff480 with size: 0.000244 MiB 00:06:40.294 element at address: 
0x200012bff580 with size: 0.000244 MiB 00:06:40.294 element at address: 0x200012bff680 with size: 0.000244 MiB 00:06:40.294 element at address: 0x200012bff780 with size: 0.000244 MiB 00:06:40.294 element at address: 0x200012bff880 with size: 0.000244 MiB 00:06:40.294 element at address: 0x200012bff980 with size: 0.000244 MiB 00:06:40.294 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:06:40.294 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:06:40.294 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:06:40.294 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:06:40.294 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:06:40.294 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:06:40.294 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:06:40.294 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:06:40.294 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:06:40.294 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:06:40.294 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:06:40.294 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:06:40.294 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:06:40.294 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:06:40.294 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:06:40.294 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:06:40.294 
element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:06:40.294 element at address: 0x200019affc40 with size: 0.000244 MiB 00:06:40.294 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b490ac0 with size: 0.000244 
MiB 00:06:40.294 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4926c0 
with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:06:40.294 element at 
address: 0x20001b4942c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:06:40.294 element at address: 0x200028863f40 with size: 0.000244 MiB 00:06:40.294 element at address: 0x200028864040 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20002886af80 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20002886b080 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20002886b180 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20002886b280 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20002886b380 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20002886b480 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20002886b580 with size: 0.000244 MiB 
00:06:40.294 element at address: 0x20002886b680 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20002886b780 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20002886b880 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20002886b980 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20002886be80 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20002886c080 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20002886c180 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20002886c280 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20002886c380 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20002886c480 with size: 0.000244 MiB 00:06:40.294 element at address: 0x20002886c580 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886c680 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886c780 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886c880 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886c980 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886d080 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886d180 with 
size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886d280 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886d380 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886d480 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886d580 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886d680 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886d780 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886d880 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886d980 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886da80 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886db80 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886de80 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886df80 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886e080 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886e180 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886e280 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886e380 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886e480 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886e580 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886e680 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886e780 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886e880 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886e980 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:06:40.295 element at address: 
0x20002886ed80 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886f080 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886f180 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886f280 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886f380 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886f480 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886f580 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886f680 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886f780 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886f880 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886f980 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:06:40.295 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:06:40.295 list of memzone associated elements. 
size: 607.930908 MiB 00:06:40.295 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:06:40.295 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:40.295 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:06:40.295 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:40.295 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:06:40.295 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58037_0 00:06:40.295 element at address: 0x200000dff340 with size: 48.003113 MiB 00:06:40.295 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58037_0 00:06:40.295 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:06:40.295 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58037_0 00:06:40.295 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:06:40.295 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:40.295 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:06:40.295 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:40.295 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:06:40.295 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58037_0 00:06:40.295 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:06:40.295 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58037 00:06:40.295 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:40.295 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58037 00:06:40.295 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:06:40.295 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:40.295 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:06:40.295 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:40.295 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:06:40.295 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:40.295 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:06:40.295 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:40.295 element at address: 0x200000cff100 with size: 1.000549 MiB 00:06:40.295 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58037 00:06:40.295 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:06:40.295 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58037 00:06:40.295 element at address: 0x200019affd40 with size: 1.000549 MiB 00:06:40.295 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58037 00:06:40.295 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:06:40.295 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58037 00:06:40.295 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:06:40.295 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58037 00:06:40.295 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:06:40.295 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58037 00:06:40.295 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:06:40.295 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:40.295 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:06:40.295 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:40.295 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:06:40.295 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:40.295 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:06:40.295 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58037 00:06:40.295 element at address: 0x20000085df80 with size: 0.125549 MiB 00:06:40.295 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58037 00:06:40.295 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:06:40.295 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:40.295 element at address: 0x200028864140 with size: 0.023804 MiB 00:06:40.295 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:40.295 element at address: 0x200000859d40 with size: 0.016174 MiB 00:06:40.295 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58037 00:06:40.295 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:06:40.295 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:40.295 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:06:40.295 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58037 00:06:40.296 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:06:40.296 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58037 00:06:40.296 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:06:40.296 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58037 00:06:40.296 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:06:40.296 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:40.296 21:33:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:40.296 21:33:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58037 00:06:40.296 21:33:40 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58037 ']' 00:06:40.296 21:33:40 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58037 00:06:40.296 21:33:40 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:40.296 21:33:40 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.296 21:33:40 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58037 00:06:40.296 21:33:40 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.296 21:33:40 dpdk_mem_utility -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.296 21:33:40 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58037' 00:06:40.296 killing process with pid 58037 00:06:40.296 21:33:40 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58037 00:06:40.296 21:33:40 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58037 00:06:43.584 ************************************ 00:06:43.584 END TEST dpdk_mem_utility 00:06:43.584 ************************************ 00:06:43.584 00:06:43.584 real 0m4.454s 00:06:43.584 user 0m4.420s 00:06:43.584 sys 0m0.562s 00:06:43.584 21:33:43 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.584 21:33:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:43.584 21:33:43 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:43.584 21:33:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.584 21:33:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.584 21:33:43 -- common/autotest_common.sh@10 -- # set +x 00:06:43.584 ************************************ 00:06:43.584 START TEST event 00:06:43.584 ************************************ 00:06:43.584 21:33:43 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:43.584 * Looking for test storage... 
00:06:43.584 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:43.584 21:33:43 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:43.584 21:33:43 event -- common/autotest_common.sh@1711 -- # lcov --version 00:06:43.584 21:33:43 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:43.584 21:33:43 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:43.584 21:33:43 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.584 21:33:43 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.584 21:33:43 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.584 21:33:43 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.584 21:33:43 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.584 21:33:43 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.584 21:33:43 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.584 21:33:43 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.584 21:33:43 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.584 21:33:43 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.584 21:33:43 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.584 21:33:43 event -- scripts/common.sh@344 -- # case "$op" in 00:06:43.584 21:33:43 event -- scripts/common.sh@345 -- # : 1 00:06:43.584 21:33:43 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.584 21:33:43 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:43.584 21:33:43 event -- scripts/common.sh@365 -- # decimal 1 00:06:43.584 21:33:43 event -- scripts/common.sh@353 -- # local d=1 00:06:43.584 21:33:43 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.584 21:33:43 event -- scripts/common.sh@355 -- # echo 1 00:06:43.584 21:33:43 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.584 21:33:43 event -- scripts/common.sh@366 -- # decimal 2 00:06:43.584 21:33:43 event -- scripts/common.sh@353 -- # local d=2 00:06:43.584 21:33:43 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.584 21:33:43 event -- scripts/common.sh@355 -- # echo 2 00:06:43.584 21:33:43 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.584 21:33:43 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.584 21:33:43 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.584 21:33:43 event -- scripts/common.sh@368 -- # return 0 00:06:43.584 21:33:43 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.584 21:33:43 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:43.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.584 --rc genhtml_branch_coverage=1 00:06:43.584 --rc genhtml_function_coverage=1 00:06:43.584 --rc genhtml_legend=1 00:06:43.584 --rc geninfo_all_blocks=1 00:06:43.584 --rc geninfo_unexecuted_blocks=1 00:06:43.584 00:06:43.584 ' 00:06:43.584 21:33:43 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:43.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.584 --rc genhtml_branch_coverage=1 00:06:43.584 --rc genhtml_function_coverage=1 00:06:43.584 --rc genhtml_legend=1 00:06:43.584 --rc geninfo_all_blocks=1 00:06:43.584 --rc geninfo_unexecuted_blocks=1 00:06:43.584 00:06:43.584 ' 00:06:43.584 21:33:43 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:43.584 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:43.584 --rc genhtml_branch_coverage=1 00:06:43.584 --rc genhtml_function_coverage=1 00:06:43.584 --rc genhtml_legend=1 00:06:43.584 --rc geninfo_all_blocks=1 00:06:43.584 --rc geninfo_unexecuted_blocks=1 00:06:43.584 00:06:43.584 ' 00:06:43.584 21:33:43 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:43.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.584 --rc genhtml_branch_coverage=1 00:06:43.584 --rc genhtml_function_coverage=1 00:06:43.584 --rc genhtml_legend=1 00:06:43.584 --rc geninfo_all_blocks=1 00:06:43.584 --rc geninfo_unexecuted_blocks=1 00:06:43.584 00:06:43.584 ' 00:06:43.584 21:33:43 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:43.584 21:33:43 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:43.584 21:33:43 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:43.584 21:33:43 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:43.584 21:33:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.584 21:33:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:43.584 ************************************ 00:06:43.584 START TEST event_perf 00:06:43.584 ************************************ 00:06:43.584 21:33:43 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:43.584 Running I/O for 1 seconds...[2024-12-10 21:33:44.035765] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:06:43.584 [2024-12-10 21:33:44.036000] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58150 ] 00:06:43.584 [2024-12-10 21:33:44.219062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:43.584 [2024-12-10 21:33:44.358019] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.584 [2024-12-10 21:33:44.358326] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.584 [2024-12-10 21:33:44.358245] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.584 Running I/O for 1 seconds...[2024-12-10 21:33:44.358362] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:44.961 00:06:44.961 lcore 0: 179390 00:06:44.961 lcore 1: 179390 00:06:44.961 lcore 2: 179389 00:06:44.961 lcore 3: 179389 00:06:44.961 done. 
00:06:44.961 00:06:44.961 real 0m1.636s 00:06:44.961 user 0m4.384s 00:06:44.961 sys 0m0.121s 00:06:44.961 21:33:45 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.961 21:33:45 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:44.961 ************************************ 00:06:44.961 END TEST event_perf 00:06:44.961 ************************************ 00:06:44.961 21:33:45 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:44.961 21:33:45 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:44.961 21:33:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.961 21:33:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:44.961 ************************************ 00:06:44.961 START TEST event_reactor 00:06:44.961 ************************************ 00:06:44.961 21:33:45 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:44.961 [2024-12-10 21:33:45.738489] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:06:44.961 [2024-12-10 21:33:45.738708] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58190 ] 00:06:45.220 [2024-12-10 21:33:45.906757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.525 [2024-12-10 21:33:46.033656] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.908 test_start 00:06:46.908 oneshot 00:06:46.908 tick 100 00:06:46.908 tick 100 00:06:46.908 tick 250 00:06:46.908 tick 100 00:06:46.908 tick 100 00:06:46.908 tick 100 00:06:46.908 tick 250 00:06:46.908 tick 500 00:06:46.908 tick 100 00:06:46.908 tick 100 00:06:46.908 tick 250 00:06:46.908 tick 100 00:06:46.908 tick 100 00:06:46.909 test_end 00:06:46.909 00:06:46.909 real 0m1.587s 00:06:46.909 user 0m1.388s 00:06:46.909 sys 0m0.090s 00:06:46.909 21:33:47 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.909 21:33:47 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:46.909 ************************************ 00:06:46.909 END TEST event_reactor 00:06:46.909 ************************************ 00:06:46.909 21:33:47 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:46.909 21:33:47 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:46.909 21:33:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.909 21:33:47 event -- common/autotest_common.sh@10 -- # set +x 00:06:46.909 ************************************ 00:06:46.909 START TEST event_reactor_perf 00:06:46.909 ************************************ 00:06:46.909 21:33:47 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:46.909 [2024-12-10 
21:33:47.393900] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:06:46.909 [2024-12-10 21:33:47.394176] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58232 ] 00:06:46.909 [2024-12-10 21:33:47.574271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.168 [2024-12-10 21:33:47.692356] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.549 test_start 00:06:48.549 test_end 00:06:48.549 Performance: 331172 events per second 00:06:48.549 00:06:48.549 real 0m1.608s 00:06:48.549 user 0m1.404s 00:06:48.549 sys 0m0.095s 00:06:48.549 ************************************ 00:06:48.549 END TEST event_reactor_perf 00:06:48.549 ************************************ 00:06:48.549 21:33:48 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.549 21:33:48 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:48.549 21:33:49 event -- event/event.sh@49 -- # uname -s 00:06:48.549 21:33:49 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:48.549 21:33:49 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:48.549 21:33:49 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.549 21:33:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.549 21:33:49 event -- common/autotest_common.sh@10 -- # set +x 00:06:48.549 ************************************ 00:06:48.549 START TEST event_scheduler 00:06:48.549 ************************************ 00:06:48.549 21:33:49 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:48.549 * Looking for test storage... 
00:06:48.549 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:48.549 21:33:49 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:48.549 21:33:49 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:06:48.549 21:33:49 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:48.549 21:33:49 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:48.549 21:33:49 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.549 21:33:49 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.549 21:33:49 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.549 21:33:49 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.549 21:33:49 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.549 21:33:49 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.549 21:33:49 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.549 21:33:49 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.549 21:33:49 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.549 21:33:49 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.549 21:33:49 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.549 21:33:49 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:48.549 21:33:49 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:48.549 21:33:49 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.549 21:33:49 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:48.549 21:33:49 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:48.549 21:33:49 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:48.549 21:33:49 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.549 21:33:49 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:48.549 21:33:49 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.549 21:33:49 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:48.549 21:33:49 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:48.549 21:33:49 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.549 21:33:49 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:48.549 21:33:49 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.549 21:33:49 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.549 21:33:49 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.549 21:33:49 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:48.549 21:33:49 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.549 21:33:49 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:48.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.549 --rc genhtml_branch_coverage=1 00:06:48.549 --rc genhtml_function_coverage=1 00:06:48.549 --rc genhtml_legend=1 00:06:48.549 --rc geninfo_all_blocks=1 00:06:48.549 --rc geninfo_unexecuted_blocks=1 00:06:48.549 00:06:48.549 ' 00:06:48.549 21:33:49 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:48.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.549 --rc genhtml_branch_coverage=1 00:06:48.549 --rc genhtml_function_coverage=1 00:06:48.549 --rc 
genhtml_legend=1 00:06:48.549 --rc geninfo_all_blocks=1 00:06:48.549 --rc geninfo_unexecuted_blocks=1 00:06:48.549 00:06:48.549 ' 00:06:48.549 21:33:49 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:48.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.549 --rc genhtml_branch_coverage=1 00:06:48.549 --rc genhtml_function_coverage=1 00:06:48.549 --rc genhtml_legend=1 00:06:48.549 --rc geninfo_all_blocks=1 00:06:48.549 --rc geninfo_unexecuted_blocks=1 00:06:48.549 00:06:48.549 ' 00:06:48.549 21:33:49 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:48.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.549 --rc genhtml_branch_coverage=1 00:06:48.549 --rc genhtml_function_coverage=1 00:06:48.549 --rc genhtml_legend=1 00:06:48.549 --rc geninfo_all_blocks=1 00:06:48.549 --rc geninfo_unexecuted_blocks=1 00:06:48.549 00:06:48.549 ' 00:06:48.549 21:33:49 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:48.549 21:33:49 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:48.549 21:33:49 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58302 00:06:48.549 21:33:49 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:48.549 21:33:49 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58302 00:06:48.550 21:33:49 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58302 ']' 00:06:48.550 21:33:49 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.550 21:33:49 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.550 21:33:49 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:48.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.550 21:33:49 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.550 21:33:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:48.809 [2024-12-10 21:33:49.339676] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:06:48.809 [2024-12-10 21:33:49.339912] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58302 ] 00:06:48.809 [2024-12-10 21:33:49.520444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:49.069 [2024-12-10 21:33:49.664789] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.069 [2024-12-10 21:33:49.664972] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.069 [2024-12-10 21:33:49.665016] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.069 [2024-12-10 21:33:49.665028] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:49.638 21:33:50 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.638 21:33:50 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:49.638 21:33:50 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:49.638 21:33:50 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.638 21:33:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:49.638 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:49.638 POWER: Cannot set governor of lcore 0 to userspace 00:06:49.638 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:49.638 POWER: Cannot set governor of lcore 0 to performance 00:06:49.638 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:49.638 POWER: Cannot set governor of lcore 0 to userspace 00:06:49.638 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:49.638 POWER: Cannot set governor of lcore 0 to userspace 00:06:49.638 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:49.638 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:49.638 POWER: Unable to set Power Management Environment for lcore 0 00:06:49.638 [2024-12-10 21:33:50.266314] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:06:49.638 [2024-12-10 21:33:50.266372] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:06:49.638 [2024-12-10 21:33:50.266412] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:49.638 [2024-12-10 21:33:50.266478] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:49.638 [2024-12-10 21:33:50.266516] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:49.638 [2024-12-10 21:33:50.266557] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:49.638 21:33:50 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.638 21:33:50 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:49.638 21:33:50 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.638 21:33:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:49.898 [2024-12-10 21:33:50.642566] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:49.898 21:33:50 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.898 21:33:50 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:49.898 21:33:50 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.898 21:33:50 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.898 21:33:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:49.898 ************************************ 00:06:49.898 START TEST scheduler_create_thread 00:06:49.898 ************************************ 00:06:49.898 21:33:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:49.898 21:33:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:49.898 21:33:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.898 21:33:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.898 2 00:06:49.898 21:33:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.898 21:33:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:49.898 21:33:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.898 21:33:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.898 3 00:06:49.898 21:33:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.898 21:33:50 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:49.898 21:33:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.898 21:33:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.158 4 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.158 5 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.158 6 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:06:50.158 7 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.158 8 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.158 9 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.158 10 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:50.158 21:33:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.158 21:33:50 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.119 21:33:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.119 00:06:51.119 real 0m1.174s 00:06:51.119 user 0m0.015s 00:06:51.119 ************************************ 00:06:51.119 END TEST scheduler_create_thread 00:06:51.119 ************************************ 00:06:51.119 sys 0m0.007s 00:06:51.119 21:33:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.119 21:33:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.119 21:33:51 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:51.119 21:33:51 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58302 00:06:51.119 21:33:51 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58302 ']' 00:06:51.119 21:33:51 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58302 00:06:51.119 21:33:51 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:51.119 21:33:51 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.119 21:33:51 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58302 00:06:51.377 killing process with pid 58302 00:06:51.377 21:33:51 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:51.377 21:33:51 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:51.377 21:33:51 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58302' 00:06:51.377 21:33:51 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58302 00:06:51.377 21:33:51 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58302 00:06:51.636 [2024-12-10 21:33:52.304255] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:53.016 ************************************ 00:06:53.016 END TEST event_scheduler 00:06:53.016 ************************************ 00:06:53.016 00:06:53.016 real 0m4.668s 00:06:53.016 user 0m9.098s 00:06:53.016 sys 0m0.494s 00:06:53.016 21:33:53 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.016 21:33:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:53.016 21:33:53 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:53.016 21:33:53 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:53.016 21:33:53 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.016 21:33:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.016 21:33:53 event -- common/autotest_common.sh@10 -- # set +x 00:06:53.016 ************************************ 00:06:53.016 START TEST app_repeat 00:06:53.016 ************************************ 00:06:53.016 21:33:53 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:53.016 21:33:53 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.016 21:33:53 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.016 21:33:53 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:53.016 21:33:53 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:53.016 21:33:53 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:53.016 21:33:53 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:53.016 21:33:53 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:53.016 Process app_repeat pid: 58403 00:06:53.016 spdk_app_start Round 0 00:06:53.016 21:33:53 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58403 00:06:53.016 21:33:53 event.app_repeat -- event/event.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:53.016 21:33:53 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:53.016 21:33:53 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58403' 00:06:53.016 21:33:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:53.016 21:33:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:53.016 21:33:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58403 /var/tmp/spdk-nbd.sock 00:06:53.016 21:33:53 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58403 ']' 00:06:53.016 21:33:53 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:53.016 21:33:53 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:53.016 21:33:53 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:53.016 21:33:53 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.016 21:33:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:53.274 [2024-12-10 21:33:53.803952] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
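The `waitforlisten` step traced above polls until the freshly started `app_repeat` process is up and listening on its UNIX domain socket (`/var/tmp/spdk-nbd.sock`). A minimal sketch of that wait-for-socket pattern; the function name, retry count, and sleep interval are illustrative assumptions, not SPDK's actual `waitforlisten` implementation:

```shell
# Sketch of the wait-for-socket pattern: poll until a UNIX domain socket
# appears at the given path, or give up after a bounded number of retries.
# Retry count and interval are illustrative, not taken from the SPDK scripts.
wait_for_socket() {
    local sock="$1" retries="${2:-100}"
    local i
    for ((i = 0; i < retries; i++)); do
        # [ -S path ] is true once a UNIX domain socket exists at that path
        if [ -S "$sock" ]; then
            return 0
        fi
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}
```

The real helper additionally verifies that the expected PID owns the socket before returning; this sketch only checks that the socket file exists.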
00:06:53.275 [2024-12-10 21:33:53.804298] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58403 ] 00:06:53.275 [2024-12-10 21:33:53.996717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:53.534 [2024-12-10 21:33:54.138117] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.534 [2024-12-10 21:33:54.138125] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.101 21:33:54 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.101 21:33:54 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:54.101 21:33:54 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:54.668 Malloc0 00:06:54.668 21:33:55 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:54.928 Malloc1 00:06:54.928 21:33:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:54.928 21:33:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.928 21:33:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:54.928 21:33:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:54.928 21:33:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.928 21:33:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:54.928 21:33:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:54.928 21:33:55 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.928 21:33:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:54.928 21:33:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:54.928 21:33:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.928 21:33:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:54.928 21:33:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:54.928 21:33:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:54.928 21:33:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:54.928 21:33:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:55.186 /dev/nbd0 00:06:55.186 21:33:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:55.186 21:33:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:55.186 21:33:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:55.186 21:33:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:55.186 21:33:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:55.186 21:33:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:55.186 21:33:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:55.186 21:33:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:55.186 21:33:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:55.186 21:33:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:55.186 21:33:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:55.444 1+0 records in 00:06:55.444 1+0 
records out 00:06:55.444 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397799 s, 10.3 MB/s 00:06:55.444 21:33:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:55.444 21:33:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:55.444 21:33:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:55.444 21:33:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:55.444 21:33:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:55.444 21:33:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:55.444 21:33:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:55.444 21:33:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:55.702 /dev/nbd1 00:06:55.702 21:33:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:55.702 21:33:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:55.702 21:33:56 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:55.702 21:33:56 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:55.702 21:33:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:55.702 21:33:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:55.702 21:33:56 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:55.702 21:33:56 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:55.702 21:33:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:55.702 21:33:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:55.702 21:33:56 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:55.702 1+0 records in 00:06:55.702 1+0 records out 00:06:55.702 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287828 s, 14.2 MB/s 00:06:55.702 21:33:56 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:55.702 21:33:56 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:55.702 21:33:56 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:55.702 21:33:56 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:55.702 21:33:56 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:55.702 21:33:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:55.702 21:33:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:55.702 21:33:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:55.702 21:33:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.702 21:33:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:55.962 21:33:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:55.962 { 00:06:55.962 "nbd_device": "/dev/nbd0", 00:06:55.962 "bdev_name": "Malloc0" 00:06:55.962 }, 00:06:55.962 { 00:06:55.962 "nbd_device": "/dev/nbd1", 00:06:55.962 "bdev_name": "Malloc1" 00:06:55.962 } 00:06:55.962 ]' 00:06:55.962 21:33:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:55.962 { 00:06:55.962 "nbd_device": "/dev/nbd0", 00:06:55.962 "bdev_name": "Malloc0" 00:06:55.962 }, 00:06:55.962 { 00:06:55.962 "nbd_device": "/dev/nbd1", 00:06:55.962 "bdev_name": "Malloc1" 00:06:55.962 } 00:06:55.962 ]' 00:06:55.962 21:33:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
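The `nbd_get_count` steps traced here pipe the `nbd_get_disks` JSON through `jq -r '.[] | .nbd_device'` and count the `/dev/nbd` entries with `grep -c`. A standalone sketch of that pipeline with the JSON hardcoded (the real test reads it live from `rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks`):

```shell
# Sketch of the device-count check: extract nbd_device fields from the
# nbd_get_disks JSON and count them. The JSON below is a hardcoded example
# mirroring the two-disk layout seen in the log, not live RPC output.
nbd_disks_json='[ {"nbd_device": "/dev/nbd0", "bdev_name": "Malloc0"},
                  {"nbd_device": "/dev/nbd1", "bdev_name": "Malloc1"} ]'
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd)
echo "$count"   # 2
```

After the disks are stopped, the same pipeline over the now-empty `[]` JSON yields a count of 0, which is what the later `'[' 0 -ne 0 ']'` check in the log asserts.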
00:06:55.962 21:33:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:55.962 /dev/nbd1' 00:06:55.962 21:33:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:55.962 /dev/nbd1' 00:06:55.962 21:33:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:55.962 21:33:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:55.962 21:33:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:55.962 21:33:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:55.962 21:33:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:55.962 21:33:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:55.962 21:33:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.962 21:33:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:55.962 21:33:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:55.962 21:33:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:55.962 21:33:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:55.962 21:33:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:55.962 256+0 records in 00:06:55.962 256+0 records out 00:06:55.962 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00651334 s, 161 MB/s 00:06:55.962 21:33:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:55.962 21:33:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:55.962 256+0 records in 00:06:55.962 256+0 records out 00:06:55.962 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215849 s, 48.6 MB/s 00:06:55.962 21:33:56 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:55.962 21:33:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:55.962 256+0 records in 00:06:55.962 256+0 records out 00:06:55.962 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0271858 s, 38.6 MB/s 00:06:55.962 21:33:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:55.962 21:33:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.962 21:33:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:55.962 21:33:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:55.962 21:33:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:55.962 21:33:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:55.962 21:33:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:55.962 21:33:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:55.962 21:33:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:55.962 21:33:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:55.962 21:33:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:55.962 21:33:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:55.962 21:33:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:55.962 21:33:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.962 21:33:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.962 21:33:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:55.962 21:33:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:55.963 21:33:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:55.963 21:33:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:56.222 21:33:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:56.222 21:33:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:56.222 21:33:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:56.222 21:33:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:56.222 21:33:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:56.222 21:33:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:56.222 21:33:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:56.222 21:33:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:56.222 21:33:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.222 21:33:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:56.482 21:33:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:56.482 21:33:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:56.482 21:33:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:56.482 21:33:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:56.482 21:33:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:56.482 21:33:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:56.482 21:33:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:06:56.482 21:33:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:56.482 21:33:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:56.482 21:33:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.482 21:33:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:56.742 21:33:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:56.742 21:33:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:56.742 21:33:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:56.742 21:33:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:56.742 21:33:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:56.742 21:33:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:56.742 21:33:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:56.742 21:33:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:56.742 21:33:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:56.742 21:33:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:56.742 21:33:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:56.742 21:33:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:56.742 21:33:57 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:57.311 21:33:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:58.744 [2024-12-10 21:33:59.350809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:58.744 [2024-12-10 21:33:59.486580] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.744 [2024-12-10 21:33:59.486582] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.004 
[2024-12-10 21:33:59.722139] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:59.004 [2024-12-10 21:33:59.722257] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:00.385 spdk_app_start Round 1 00:07:00.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:00.385 21:34:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:00.385 21:34:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:00.385 21:34:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58403 /var/tmp/spdk-nbd.sock 00:07:00.385 21:34:00 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58403 ']' 00:07:00.385 21:34:00 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:00.385 21:34:00 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.385 21:34:00 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
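The `nbd_dd_data_verify` sequences traced in both rounds follow one pattern: fill a reference file with random data via `dd if=/dev/urandom`, copy it onto each NBD device, then byte-compare each device against the reference with `cmp -b -n 1M`. A runnable sketch of that round trip; plain temp files stand in for `/dev/nbd0` and `/dev/nbd1` so it works without real NBD devices, and the real test additionally passes `oflag=direct` when writing to the block devices:

```shell
# Sketch of the write/verify round trip. Temp files are stand-ins for the
# NBD block devices; sizes match the traced commands (256 x 4096 B = 1 MiB).
tmp_file=$(mktemp)
dev0=$(mktemp)   # stand-in for /dev/nbd0
dev1=$(mktemp)   # stand-in for /dev/nbd1

# Fill the reference file with 1 MiB of random data.
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null

# Write the reference data to each "device".
for dev in "$dev0" "$dev1"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 2>/dev/null
done

# Verify: byte-compare the first 1 MiB of each "device" to the reference.
for dev in "$dev0" "$dev1"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done

rm -f "$tmp_file" "$dev0" "$dev1"
```

Because the data round-trips through the kernel NBD driver and back through SPDK's bdev layer in the real test, a successful `cmp` exercises the whole I/O path, not just the RPC plumbing.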
00:07:00.385 21:34:00 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.385 21:34:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:00.644 21:34:01 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.644 21:34:01 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:00.644 21:34:01 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:00.903 Malloc0 00:07:00.903 21:34:01 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:01.163 Malloc1 00:07:01.163 21:34:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:01.163 21:34:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.163 21:34:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:01.163 21:34:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:01.163 21:34:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.163 21:34:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:01.163 21:34:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:01.163 21:34:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.163 21:34:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:01.163 21:34:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:01.163 21:34:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.163 21:34:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:01.163 21:34:01 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:01.163 21:34:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:01.163 21:34:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:01.163 21:34:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:01.422 /dev/nbd0 00:07:01.682 21:34:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:01.682 21:34:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:01.682 21:34:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:01.682 21:34:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:01.682 21:34:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.682 21:34:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.682 21:34:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:01.682 21:34:02 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:01.682 21:34:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.682 21:34:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.682 21:34:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:01.682 1+0 records in 00:07:01.682 1+0 records out 00:07:01.682 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000438746 s, 9.3 MB/s 00:07:01.682 21:34:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:01.682 21:34:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:01.682 21:34:02 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:01.682 21:34:02 
event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:01.682 21:34:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:01.682 21:34:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.682 21:34:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:01.682 21:34:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:01.941 /dev/nbd1 00:07:01.941 21:34:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:01.941 21:34:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:01.941 21:34:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:01.941 21:34:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:01.941 21:34:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.941 21:34:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.941 21:34:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:01.941 21:34:02 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:01.941 21:34:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.941 21:34:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.941 21:34:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:01.941 1+0 records in 00:07:01.941 1+0 records out 00:07:01.941 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276006 s, 14.8 MB/s 00:07:01.941 21:34:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:01.941 21:34:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:01.941 21:34:02 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:01.941 21:34:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:01.941 21:34:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:01.941 21:34:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.941 21:34:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:01.941 21:34:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:01.941 21:34:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.941 21:34:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:02.200 21:34:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:02.200 { 00:07:02.200 "nbd_device": "/dev/nbd0", 00:07:02.200 "bdev_name": "Malloc0" 00:07:02.200 }, 00:07:02.200 { 00:07:02.200 "nbd_device": "/dev/nbd1", 00:07:02.200 "bdev_name": "Malloc1" 00:07:02.200 } 00:07:02.200 ]' 00:07:02.200 21:34:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:02.200 { 00:07:02.200 "nbd_device": "/dev/nbd0", 00:07:02.200 "bdev_name": "Malloc0" 00:07:02.200 }, 00:07:02.200 { 00:07:02.200 "nbd_device": "/dev/nbd1", 00:07:02.200 "bdev_name": "Malloc1" 00:07:02.200 } 00:07:02.200 ]' 00:07:02.200 21:34:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:02.200 21:34:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:02.200 /dev/nbd1' 00:07:02.200 21:34:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:02.200 21:34:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:02.200 /dev/nbd1' 00:07:02.200 21:34:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:02.200 21:34:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:02.200 
21:34:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:02.200 21:34:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:02.200 21:34:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:02.200 21:34:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.200 21:34:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:02.200 21:34:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:02.200 21:34:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:02.200 21:34:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:02.200 21:34:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:02.200 256+0 records in 00:07:02.200 256+0 records out 00:07:02.200 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00622202 s, 169 MB/s 00:07:02.200 21:34:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.200 21:34:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:02.200 256+0 records in 00:07:02.200 256+0 records out 00:07:02.200 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0234826 s, 44.7 MB/s 00:07:02.200 21:34:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.200 21:34:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:02.200 256+0 records in 00:07:02.200 256+0 records out 00:07:02.200 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239323 s, 43.8 MB/s 00:07:02.200 21:34:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:07:02.200 21:34:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.200 21:34:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:02.200 21:34:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:02.200 21:34:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:02.200 21:34:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:02.200 21:34:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:02.200 21:34:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:02.200 21:34:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:02.200 21:34:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:02.200 21:34:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:02.200 21:34:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:02.200 21:34:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:02.200 21:34:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.200 21:34:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.200 21:34:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:02.200 21:34:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:02.200 21:34:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.200 21:34:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:02.459 21:34:03 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:02.459 21:34:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:02.459 21:34:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:02.459 21:34:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.459 21:34:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.459 21:34:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:02.760 21:34:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:02.760 21:34:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.760 21:34:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.760 21:34:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:03.030 21:34:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:03.030 21:34:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:03.030 21:34:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:03.030 21:34:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.030 21:34:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.030 21:34:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:03.030 21:34:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:03.030 21:34:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.030 21:34:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:03.030 21:34:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.030 21:34:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:03.289 21:34:03 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:03.289 21:34:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:03.289 21:34:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:03.289 21:34:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:03.289 21:34:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:03.289 21:34:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:03.289 21:34:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:03.289 21:34:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:03.289 21:34:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:03.289 21:34:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:03.289 21:34:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:03.289 21:34:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:03.289 21:34:03 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:03.856 21:34:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:05.233 [2024-12-10 21:34:05.746543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:05.233 [2024-12-10 21:34:05.880040] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.233 [2024-12-10 21:34:05.880060] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.493 [2024-12-10 21:34:06.104162] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:05.493 [2024-12-10 21:34:06.104282] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:06.937 spdk_app_start Round 2 00:07:06.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:06.937 21:34:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:06.937 21:34:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:06.937 21:34:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58403 /var/tmp/spdk-nbd.sock 00:07:06.937 21:34:07 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58403 ']' 00:07:06.937 21:34:07 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:06.937 21:34:07 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.937 21:34:07 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:06.937 21:34:07 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.937 21:34:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:06.937 21:34:07 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.937 21:34:07 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:06.937 21:34:07 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:07.196 Malloc0 00:07:07.196 21:34:07 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:07.761 Malloc1 00:07:07.761 21:34:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:07.761 21:34:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.761 21:34:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:07.761 21:34:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:07.761 21:34:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.761 21:34:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:07.761 21:34:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:07.761 21:34:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.761 21:34:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:07.761 21:34:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:07.761 21:34:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.761 21:34:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:07.761 21:34:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:07.761 21:34:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:07.761 21:34:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:07.761 21:34:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:08.019 /dev/nbd0 00:07:08.019 21:34:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:08.019 21:34:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:08.019 21:34:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:08.019 21:34:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:08.019 21:34:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:08.019 21:34:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:08.019 21:34:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:08.019 21:34:08 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:08.019 21:34:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:07:08.019 21:34:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:08.019 21:34:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:08.019 1+0 records in 00:07:08.019 1+0 records out 00:07:08.019 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250101 s, 16.4 MB/s 00:07:08.019 21:34:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:08.019 21:34:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:08.019 21:34:08 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:08.019 21:34:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:08.019 21:34:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:08.019 21:34:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.019 21:34:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:08.019 21:34:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:08.277 /dev/nbd1 00:07:08.277 21:34:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:08.277 21:34:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:08.277 21:34:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:08.277 21:34:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:08.277 21:34:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:08.277 21:34:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:08.277 21:34:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:08.277 21:34:08 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:07:08.277 21:34:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:08.277 21:34:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:08.277 21:34:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:08.277 1+0 records in 00:07:08.277 1+0 records out 00:07:08.277 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401362 s, 10.2 MB/s 00:07:08.277 21:34:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:08.277 21:34:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:08.277 21:34:08 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:08.277 21:34:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:08.277 21:34:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:08.277 21:34:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.277 21:34:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:08.277 21:34:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:08.277 21:34:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.277 21:34:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:08.535 21:34:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:08.535 { 00:07:08.535 "nbd_device": "/dev/nbd0", 00:07:08.535 "bdev_name": "Malloc0" 00:07:08.535 }, 00:07:08.535 { 00:07:08.535 "nbd_device": "/dev/nbd1", 00:07:08.535 "bdev_name": "Malloc1" 00:07:08.535 } 00:07:08.535 ]' 00:07:08.535 21:34:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:08.535 { 
00:07:08.535 "nbd_device": "/dev/nbd0", 00:07:08.535 "bdev_name": "Malloc0" 00:07:08.535 }, 00:07:08.535 { 00:07:08.535 "nbd_device": "/dev/nbd1", 00:07:08.535 "bdev_name": "Malloc1" 00:07:08.535 } 00:07:08.535 ]' 00:07:08.535 21:34:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:08.535 21:34:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:08.535 /dev/nbd1' 00:07:08.535 21:34:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:08.535 /dev/nbd1' 00:07:08.535 21:34:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:08.535 21:34:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:08.535 21:34:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:08.535 21:34:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:08.535 21:34:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:08.535 21:34:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:08.535 21:34:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:08.535 21:34:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:08.535 21:34:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:08.535 21:34:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:08.535 21:34:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:08.535 21:34:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:08.535 256+0 records in 00:07:08.535 256+0 records out 00:07:08.535 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136931 s, 76.6 MB/s 00:07:08.535 21:34:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:08.535 21:34:09 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:08.535 256+0 records in 00:07:08.535 256+0 records out 00:07:08.535 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024264 s, 43.2 MB/s 00:07:08.535 21:34:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:08.535 21:34:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:08.535 256+0 records in 00:07:08.535 256+0 records out 00:07:08.535 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0302144 s, 34.7 MB/s 00:07:08.536 21:34:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:08.536 21:34:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:08.536 21:34:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:08.536 21:34:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:08.536 21:34:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:08.536 21:34:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:08.536 21:34:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:08.536 21:34:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:08.536 21:34:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:08.536 21:34:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:08.536 21:34:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:08.795 21:34:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:07:08.795 21:34:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:08.795 21:34:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.795 21:34:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:08.795 21:34:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:08.795 21:34:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:08.795 21:34:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.795 21:34:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:09.054 21:34:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:09.054 21:34:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:09.054 21:34:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:09.054 21:34:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:09.054 21:34:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:09.054 21:34:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:09.054 21:34:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:09.054 21:34:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:09.054 21:34:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:09.054 21:34:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:09.313 21:34:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:09.313 21:34:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:09.314 21:34:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:09.314 21:34:09 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:09.314 21:34:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:09.314 21:34:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:09.314 21:34:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:09.314 21:34:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:09.314 21:34:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:09.314 21:34:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.314 21:34:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:09.572 21:34:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:09.572 21:34:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:09.572 21:34:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:09.572 21:34:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:09.572 21:34:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:09.572 21:34:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:09.572 21:34:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:09.572 21:34:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:09.572 21:34:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:09.572 21:34:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:09.572 21:34:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:09.572 21:34:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:09.572 21:34:10 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:10.140 21:34:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:11.519 
[2024-12-10 21:34:12.000616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:11.519 [2024-12-10 21:34:12.131916] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.519 [2024-12-10 21:34:12.131918] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.778 [2024-12-10 21:34:12.356005] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:11.778 [2024-12-10 21:34:12.356107] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:13.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:13.160 21:34:13 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58403 /var/tmp/spdk-nbd.sock 00:07:13.160 21:34:13 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58403 ']' 00:07:13.160 21:34:13 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:13.160 21:34:13 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.160 21:34:13 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:13.160 21:34:13 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.160 21:34:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:13.160 21:34:13 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.160 21:34:13 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:13.160 21:34:13 event.app_repeat -- event/event.sh@39 -- # killprocess 58403 00:07:13.160 21:34:13 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58403 ']' 00:07:13.160 21:34:13 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58403 00:07:13.160 21:34:13 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:13.160 21:34:13 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:13.160 21:34:13 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58403 00:07:13.418 killing process with pid 58403 00:07:13.418 21:34:13 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:13.418 21:34:13 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:13.418 21:34:13 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58403' 00:07:13.418 21:34:13 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58403 00:07:13.418 21:34:13 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58403 00:07:14.354 spdk_app_start is called in Round 0. 00:07:14.354 Shutdown signal received, stop current app iteration 00:07:14.354 Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 reinitialization... 00:07:14.354 spdk_app_start is called in Round 1. 00:07:14.354 Shutdown signal received, stop current app iteration 00:07:14.354 Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 reinitialization... 00:07:14.354 spdk_app_start is called in Round 2. 
00:07:14.354 Shutdown signal received, stop current app iteration 00:07:14.354 Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 reinitialization... 00:07:14.354 spdk_app_start is called in Round 3. 00:07:14.354 Shutdown signal received, stop current app iteration 00:07:14.613 21:34:15 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:14.613 21:34:15 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:14.613 00:07:14.613 real 0m21.421s 00:07:14.613 user 0m46.737s 00:07:14.613 sys 0m3.065s 00:07:14.613 21:34:15 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.613 21:34:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:14.613 ************************************ 00:07:14.613 END TEST app_repeat 00:07:14.613 ************************************ 00:07:14.613 21:34:15 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:14.613 21:34:15 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:14.613 21:34:15 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:14.613 21:34:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.613 21:34:15 event -- common/autotest_common.sh@10 -- # set +x 00:07:14.613 ************************************ 00:07:14.613 START TEST cpu_locks 00:07:14.613 ************************************ 00:07:14.613 21:34:15 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:14.613 * Looking for test storage... 
00:07:14.613 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:14.613 21:34:15 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:14.613 21:34:15 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:07:14.613 21:34:15 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:14.872 21:34:15 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:14.872 21:34:15 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:14.872 21:34:15 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:14.872 21:34:15 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:14.872 21:34:15 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:14.872 21:34:15 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:14.872 21:34:15 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:14.872 21:34:15 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:14.872 21:34:15 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:14.872 21:34:15 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:14.872 21:34:15 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:14.872 21:34:15 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:14.872 21:34:15 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:14.872 21:34:15 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:14.872 21:34:15 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:14.872 21:34:15 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:14.872 21:34:15 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:14.872 21:34:15 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:14.872 21:34:15 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:14.872 21:34:15 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:14.872 21:34:15 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:14.872 21:34:15 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:14.872 21:34:15 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:14.872 21:34:15 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:14.872 21:34:15 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:14.872 21:34:15 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:14.872 21:34:15 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:14.872 21:34:15 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:14.872 21:34:15 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:14.872 21:34:15 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:14.872 21:34:15 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:14.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.872 --rc genhtml_branch_coverage=1 00:07:14.872 --rc genhtml_function_coverage=1 00:07:14.872 --rc genhtml_legend=1 00:07:14.872 --rc geninfo_all_blocks=1 00:07:14.872 --rc geninfo_unexecuted_blocks=1 00:07:14.872 00:07:14.872 ' 00:07:14.872 21:34:15 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:14.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.872 --rc genhtml_branch_coverage=1 00:07:14.872 --rc genhtml_function_coverage=1 00:07:14.872 --rc genhtml_legend=1 00:07:14.872 --rc geninfo_all_blocks=1 00:07:14.872 --rc geninfo_unexecuted_blocks=1 
00:07:14.872 00:07:14.872 ' 00:07:14.872 21:34:15 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:14.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.872 --rc genhtml_branch_coverage=1 00:07:14.872 --rc genhtml_function_coverage=1 00:07:14.872 --rc genhtml_legend=1 00:07:14.872 --rc geninfo_all_blocks=1 00:07:14.872 --rc geninfo_unexecuted_blocks=1 00:07:14.872 00:07:14.872 ' 00:07:14.872 21:34:15 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:14.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.872 --rc genhtml_branch_coverage=1 00:07:14.872 --rc genhtml_function_coverage=1 00:07:14.872 --rc genhtml_legend=1 00:07:14.872 --rc geninfo_all_blocks=1 00:07:14.872 --rc geninfo_unexecuted_blocks=1 00:07:14.872 00:07:14.872 ' 00:07:14.872 21:34:15 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:14.872 21:34:15 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:14.872 21:34:15 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:14.872 21:34:15 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:14.872 21:34:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:14.872 21:34:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.872 21:34:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:14.872 ************************************ 00:07:14.872 START TEST default_locks 00:07:14.872 ************************************ 00:07:14.872 21:34:15 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:14.872 21:34:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58872 00:07:14.872 21:34:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:14.872 
21:34:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58872 00:07:14.872 21:34:15 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58872 ']' 00:07:14.872 21:34:15 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.872 21:34:15 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.872 21:34:15 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.872 21:34:15 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.872 21:34:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:14.872 [2024-12-10 21:34:15.578557] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:07:14.872 [2024-12-10 21:34:15.578690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58872 ] 00:07:15.131 [2024-12-10 21:34:15.754834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.131 [2024-12-10 21:34:15.888758] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.508 21:34:16 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.508 21:34:16 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:16.508 21:34:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58872 00:07:16.508 21:34:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58872 00:07:16.508 21:34:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:16.508 21:34:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58872 00:07:16.508 21:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58872 ']' 00:07:16.508 21:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58872 00:07:16.508 21:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:16.508 21:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.508 21:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58872 00:07:16.508 21:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:16.508 21:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:16.508 21:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 58872' 00:07:16.508 killing process with pid 58872 00:07:16.508 21:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58872 00:07:16.508 21:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58872 00:07:19.800 21:34:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58872 00:07:19.800 21:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:19.800 21:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58872 00:07:19.800 21:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:19.800 21:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.800 21:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:19.800 21:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.800 21:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58872 00:07:19.800 21:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58872 ']' 00:07:19.800 21:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.800 21:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.800 21:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:19.800 21:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.800 21:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.800 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58872) - No such process 00:07:19.800 ERROR: process (pid: 58872) is no longer running 00:07:19.800 21:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.800 21:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:19.800 21:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:19.800 21:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:19.800 21:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:19.800 21:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:19.800 21:34:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:19.800 21:34:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:19.800 21:34:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:19.800 21:34:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:19.800 00:07:19.800 real 0m4.504s 00:07:19.800 user 0m4.468s 00:07:19.800 sys 0m0.645s 00:07:19.800 21:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.800 21:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.800 ************************************ 00:07:19.800 END TEST default_locks 00:07:19.800 ************************************ 00:07:19.800 21:34:20 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:19.800 21:34:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:07:19.800 21:34:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.800 21:34:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.800 ************************************ 00:07:19.800 START TEST default_locks_via_rpc 00:07:19.800 ************************************ 00:07:19.800 21:34:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:19.800 21:34:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58953 00:07:19.800 21:34:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58953 00:07:19.800 21:34:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58953 ']' 00:07:19.800 21:34:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.800 21:34:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.800 21:34:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.800 21:34:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.800 21:34:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:19.800 21:34:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.800 [2024-12-10 21:34:20.139891] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:07:19.800 [2024-12-10 21:34:20.140058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58953 ] 00:07:19.800 [2024-12-10 21:34:20.320304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.800 [2024-12-10 21:34:20.450719] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.738 21:34:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.738 21:34:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:20.738 21:34:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:20.738 21:34:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.738 21:34:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.738 21:34:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.738 21:34:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:20.738 21:34:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:20.738 21:34:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:20.738 21:34:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:20.738 21:34:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:20.738 21:34:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.738 21:34:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.738 21:34:21 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.738 21:34:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58953 00:07:20.738 21:34:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58953 00:07:20.738 21:34:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:20.998 21:34:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58953 00:07:20.998 21:34:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58953 ']' 00:07:20.998 21:34:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58953 00:07:20.998 21:34:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:20.998 21:34:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.998 21:34:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58953 00:07:20.998 21:34:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:20.998 21:34:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:20.998 killing process with pid 58953 00:07:20.998 21:34:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58953' 00:07:20.998 21:34:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58953 00:07:20.998 21:34:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58953 00:07:23.557 00:07:23.557 real 0m4.104s 00:07:23.557 user 0m4.067s 00:07:23.557 sys 0m0.603s 00:07:23.557 21:34:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.557 21:34:24 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.557 ************************************ 00:07:23.557 END TEST default_locks_via_rpc 00:07:23.557 ************************************ 00:07:23.557 21:34:24 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:23.557 21:34:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:23.557 21:34:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.557 21:34:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:23.557 ************************************ 00:07:23.557 START TEST non_locking_app_on_locked_coremask 00:07:23.557 ************************************ 00:07:23.557 21:34:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:23.557 21:34:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59029 00:07:23.557 21:34:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:23.557 21:34:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59029 /var/tmp/spdk.sock 00:07:23.557 21:34:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59029 ']' 00:07:23.557 21:34:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.557 21:34:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:23.557 21:34:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.557 21:34:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.557 21:34:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:23.557 [2024-12-10 21:34:24.300841] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:07:23.557 [2024-12-10 21:34:24.301325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59029 ] 00:07:23.816 [2024-12-10 21:34:24.475448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.816 [2024-12-10 21:34:24.593634] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.753 21:34:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.753 21:34:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:24.753 21:34:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59045 00:07:24.753 21:34:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59045 /var/tmp/spdk2.sock 00:07:24.753 21:34:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:24.753 21:34:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59045 ']' 00:07:24.753 21:34:25 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:24.753 21:34:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:24.753 21:34:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:24.753 21:34:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.753 21:34:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:25.014 [2024-12-10 21:34:25.583694] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:07:25.014 [2024-12-10 21:34:25.583807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59045 ] 00:07:25.014 [2024-12-10 21:34:25.757540] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:25.014 [2024-12-10 21:34:25.757607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.273 [2024-12-10 21:34:25.993338] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.881 21:34:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.881 21:34:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:27.881 21:34:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59029 00:07:27.881 21:34:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59029 00:07:27.881 21:34:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:28.139 21:34:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59029 00:07:28.139 21:34:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59029 ']' 00:07:28.139 21:34:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59029 00:07:28.139 21:34:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:28.139 21:34:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:28.140 21:34:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59029 00:07:28.399 21:34:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:28.399 21:34:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:28.399 killing process with pid 59029 00:07:28.399 21:34:28 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59029' 00:07:28.399 21:34:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59029 00:07:28.399 21:34:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59029 00:07:33.671 21:34:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59045 00:07:33.671 21:34:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59045 ']' 00:07:33.671 21:34:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59045 00:07:33.671 21:34:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:33.671 21:34:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:33.671 21:34:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59045 00:07:33.671 21:34:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:33.671 21:34:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:33.671 killing process with pid 59045 00:07:33.671 21:34:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59045' 00:07:33.671 21:34:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59045 00:07:33.671 21:34:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59045 00:07:35.570 00:07:35.570 real 0m12.027s 00:07:35.570 user 0m12.426s 00:07:35.570 sys 0m1.274s 00:07:35.570 21:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:07:35.570 21:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:35.570 ************************************ 00:07:35.570 END TEST non_locking_app_on_locked_coremask 00:07:35.570 ************************************ 00:07:35.570 21:34:36 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:35.570 21:34:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:35.570 21:34:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.570 21:34:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:35.570 ************************************ 00:07:35.570 START TEST locking_app_on_unlocked_coremask 00:07:35.570 ************************************ 00:07:35.570 21:34:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:35.570 21:34:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59196 00:07:35.570 21:34:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:35.570 21:34:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59196 /var/tmp/spdk.sock 00:07:35.570 21:34:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59196 ']' 00:07:35.570 21:34:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.570 21:34:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:35.570 21:34:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.570 21:34:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.570 21:34:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:35.828 [2024-12-10 21:34:36.389285] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:07:35.828 [2024-12-10 21:34:36.389414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59196 ] 00:07:35.828 [2024-12-10 21:34:36.568028] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:35.828 [2024-12-10 21:34:36.568103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.087 [2024-12-10 21:34:36.694094] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.024 21:34:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.024 21:34:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:37.024 21:34:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59218 00:07:37.024 21:34:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59218 /var/tmp/spdk2.sock 00:07:37.024 21:34:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:37.024 21:34:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59218 ']' 
00:07:37.024 21:34:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:37.024 21:34:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.024 21:34:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:37.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:37.024 21:34:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.024 21:34:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:37.024 [2024-12-10 21:34:37.753731] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:07:37.024 [2024-12-10 21:34:37.753865] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59218 ] 00:07:37.286 [2024-12-10 21:34:37.954918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.567 [2024-12-10 21:34:38.214866] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.095 21:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.095 21:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:40.095 21:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59218 00:07:40.095 21:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59218 00:07:40.095 21:34:40 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:40.354 21:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59196 00:07:40.354 21:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59196 ']' 00:07:40.354 21:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59196 00:07:40.354 21:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:40.354 21:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:40.354 21:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59196 00:07:40.354 21:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:40.354 21:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:40.354 killing process with pid 59196 00:07:40.354 21:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59196' 00:07:40.354 21:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59196 00:07:40.354 21:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59196 00:07:45.621 21:34:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59218 00:07:45.621 21:34:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59218 ']' 00:07:45.621 21:34:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59218 00:07:45.621 21:34:46 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@959 -- # uname 00:07:45.621 21:34:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:45.879 21:34:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59218 00:07:45.879 21:34:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:45.879 21:34:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:45.879 21:34:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59218' 00:07:45.879 killing process with pid 59218 00:07:45.879 21:34:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59218 00:07:45.879 21:34:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59218 00:07:48.412 00:07:48.412 real 0m12.833s 00:07:48.412 user 0m13.155s 00:07:48.412 sys 0m1.330s 00:07:48.412 21:34:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.412 21:34:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:48.412 ************************************ 00:07:48.412 END TEST locking_app_on_unlocked_coremask 00:07:48.412 ************************************ 00:07:48.412 21:34:49 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:48.412 21:34:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:48.412 21:34:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.412 21:34:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:48.412 ************************************ 00:07:48.412 START TEST 
locking_app_on_locked_coremask 00:07:48.412 ************************************ 00:07:48.412 21:34:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:48.412 21:34:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59377 00:07:48.412 21:34:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:48.412 21:34:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59377 /var/tmp/spdk.sock 00:07:48.412 21:34:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59377 ']' 00:07:48.412 21:34:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.412 21:34:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:48.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.412 21:34:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.412 21:34:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:48.412 21:34:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:48.671 [2024-12-10 21:34:49.290872] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:07:48.671 [2024-12-10 21:34:49.291005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59377 ] 00:07:48.929 [2024-12-10 21:34:49.466632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.929 [2024-12-10 21:34:49.579957] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.866 21:34:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:49.866 21:34:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:49.866 21:34:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59398 00:07:49.866 21:34:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:49.866 21:34:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59398 /var/tmp/spdk2.sock 00:07:49.866 21:34:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:49.866 21:34:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59398 /var/tmp/spdk2.sock 00:07:49.866 21:34:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:49.866 21:34:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:49.866 21:34:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:49.866 21:34:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:07:49.866 21:34:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59398 /var/tmp/spdk2.sock 00:07:49.866 21:34:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59398 ']' 00:07:49.866 21:34:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:49.866 21:34:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:49.866 21:34:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:49.866 21:34:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.866 21:34:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:49.866 [2024-12-10 21:34:50.635623] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:07:49.866 [2024-12-10 21:34:50.635749] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59398 ] 00:07:50.126 [2024-12-10 21:34:50.812615] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59377 has claimed it. 00:07:50.126 [2024-12-10 21:34:50.812688] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:50.693 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59398) - No such process 00:07:50.693 ERROR: process (pid: 59398) is no longer running 00:07:50.693 21:34:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.693 21:34:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:50.693 21:34:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:50.693 21:34:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:50.693 21:34:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:50.693 21:34:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:50.693 21:34:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59377 00:07:50.693 21:34:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59377 00:07:50.693 21:34:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:50.952 21:34:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59377 00:07:50.952 21:34:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59377 ']' 00:07:50.952 21:34:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59377 00:07:50.952 21:34:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:50.952 21:34:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:50.952 21:34:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59377 00:07:50.952 
21:34:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:50.952 21:34:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:50.952 killing process with pid 59377 00:07:50.952 21:34:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59377' 00:07:50.952 21:34:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59377 00:07:50.952 21:34:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59377 00:07:54.239 00:07:54.239 real 0m5.273s 00:07:54.239 user 0m5.515s 00:07:54.239 sys 0m0.782s 00:07:54.239 21:34:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.239 21:34:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:54.239 ************************************ 00:07:54.239 END TEST locking_app_on_locked_coremask 00:07:54.239 ************************************ 00:07:54.239 21:34:54 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:54.239 21:34:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:54.239 21:34:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.239 21:34:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:54.239 ************************************ 00:07:54.239 START TEST locking_overlapped_coremask 00:07:54.239 ************************************ 00:07:54.239 21:34:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:54.239 21:34:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 
00:07:54.239 21:34:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59468 00:07:54.239 21:34:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59468 /var/tmp/spdk.sock 00:07:54.239 21:34:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59468 ']' 00:07:54.239 21:34:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.239 21:34:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:54.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.239 21:34:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.239 21:34:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:54.239 21:34:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:54.239 [2024-12-10 21:34:54.640588] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:07:54.239 [2024-12-10 21:34:54.640937] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59468 ] 00:07:54.239 [2024-12-10 21:34:54.829054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:54.239 [2024-12-10 21:34:54.968566] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.239 [2024-12-10 21:34:54.968669] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.239 [2024-12-10 21:34:54.969486] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:55.609 21:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:55.609 21:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:55.609 21:34:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:55.609 21:34:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59497 00:07:55.609 21:34:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59497 /var/tmp/spdk2.sock 00:07:55.609 21:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:55.609 21:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59497 /var/tmp/spdk2.sock 00:07:55.609 21:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:55.609 21:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.609 21:34:56 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:55.609 21:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.609 21:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59497 /var/tmp/spdk2.sock 00:07:55.609 21:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59497 ']' 00:07:55.609 21:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:55.609 21:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:55.609 21:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:55.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:55.609 21:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:55.609 21:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:55.609 [2024-12-10 21:34:56.102579] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:07:55.609 [2024-12-10 21:34:56.102702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59497 ] 00:07:55.609 [2024-12-10 21:34:56.293472] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59468 has claimed it. 00:07:55.609 [2024-12-10 21:34:56.293791] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:56.174 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59497) - No such process 00:07:56.174 ERROR: process (pid: 59497) is no longer running 00:07:56.174 21:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:56.174 21:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:56.174 21:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:56.174 21:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:56.174 21:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:56.174 21:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:56.174 21:34:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:56.174 21:34:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:56.174 21:34:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:56.174 21:34:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:56.174 21:34:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59468 00:07:56.174 21:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59468 ']' 00:07:56.174 21:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59468 00:07:56.174 21:34:56 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:56.174 21:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.174 21:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59468 00:07:56.174 21:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:56.174 21:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:56.174 killing process with pid 59468 00:07:56.174 21:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59468' 00:07:56.174 21:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59468 00:07:56.174 21:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59468 00:07:59.476 00:07:59.476 real 0m5.095s 00:07:59.476 user 0m13.865s 00:07:59.476 sys 0m0.667s 00:07:59.476 21:34:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.476 21:34:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:59.476 ************************************ 00:07:59.476 END TEST locking_overlapped_coremask 00:07:59.476 ************************************ 00:07:59.476 21:34:59 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:59.476 21:34:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:59.476 21:34:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.476 21:34:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:59.476 ************************************ 00:07:59.476 START TEST 
locking_overlapped_coremask_via_rpc 00:07:59.476 ************************************ 00:07:59.476 21:34:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:59.476 21:34:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59561 00:07:59.476 21:34:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59561 /var/tmp/spdk.sock 00:07:59.476 21:34:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:59.476 21:34:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59561 ']' 00:07:59.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.476 21:34:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.476 21:34:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:59.476 21:34:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.476 21:34:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:59.476 21:34:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.476 [2024-12-10 21:34:59.788571] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:07:59.476 [2024-12-10 21:34:59.788695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59561 ] 00:07:59.476 [2024-12-10 21:34:59.978767] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:59.476 [2024-12-10 21:34:59.978854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:59.476 [2024-12-10 21:35:00.105476] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.476 [2024-12-10 21:35:00.105595] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.476 [2024-12-10 21:35:00.105644] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.410 21:35:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.410 21:35:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:00.410 21:35:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59579 00:08:00.410 21:35:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:00.410 21:35:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59579 /var/tmp/spdk2.sock 00:08:00.410 21:35:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59579 ']' 00:08:00.410 21:35:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:00.410 21:35:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.410 21:35:01 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:00.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:00.410 21:35:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.410 21:35:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.410 [2024-12-10 21:35:01.145282] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:08:00.410 [2024-12-10 21:35:01.145521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59579 ] 00:08:00.668 [2024-12-10 21:35:01.327776] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:00.668 [2024-12-10 21:35:01.327860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:00.925 [2024-12-10 21:35:01.606914] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:08:00.925 [2024-12-10 21:35:01.607033] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.925 [2024-12-10 21:35:01.607065] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:08:03.450 21:35:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.450 21:35:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:03.450 21:35:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:03.450 21:35:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.450 21:35:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.450 21:35:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.450 21:35:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:03.450 21:35:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:08:03.450 21:35:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:03.450 21:35:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:03.450 21:35:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:03.450 21:35:03 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:03.451 21:35:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:03.451 21:35:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:03.451 21:35:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.451 21:35:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.451 [2024-12-10 21:35:03.860660] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59561 has claimed it. 00:08:03.451 request: 00:08:03.451 { 00:08:03.451 "method": "framework_enable_cpumask_locks", 00:08:03.451 "req_id": 1 00:08:03.451 } 00:08:03.451 Got JSON-RPC error response 00:08:03.451 response: 00:08:03.451 { 00:08:03.451 "code": -32603, 00:08:03.451 "message": "Failed to claim CPU core: 2" 00:08:03.451 } 00:08:03.451 21:35:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:03.451 21:35:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:08:03.451 21:35:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:03.451 21:35:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:03.451 21:35:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:03.451 21:35:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59561 /var/tmp/spdk.sock 00:08:03.451 21:35:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59561 ']' 00:08:03.451 21:35:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.451 21:35:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.451 21:35:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.451 21:35:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.451 21:35:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.451 21:35:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.451 21:35:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:03.451 21:35:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59579 /var/tmp/spdk2.sock 00:08:03.451 21:35:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59579 ']' 00:08:03.451 21:35:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:03.451 21:35:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.451 21:35:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:03.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:03.451 21:35:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.451 21:35:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.710 21:35:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.710 21:35:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:03.710 21:35:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:03.710 21:35:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:03.710 21:35:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:03.710 21:35:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:03.710 00:08:03.710 real 0m4.689s 00:08:03.710 user 0m1.552s 00:08:03.710 sys 0m0.206s 00:08:03.710 21:35:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.710 21:35:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.710 ************************************ 00:08:03.710 END TEST locking_overlapped_coremask_via_rpc 00:08:03.710 ************************************ 00:08:03.710 21:35:04 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:03.710 21:35:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59561 ]] 00:08:03.710 21:35:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59561 00:08:03.710 21:35:04 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59561 ']' 00:08:03.710 21:35:04 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59561 00:08:03.710 21:35:04 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:03.710 21:35:04 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:03.710 21:35:04 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59561 00:08:03.710 21:35:04 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:03.710 21:35:04 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:03.710 21:35:04 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59561' 00:08:03.710 killing process with pid 59561 00:08:03.710 21:35:04 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59561 00:08:03.710 21:35:04 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59561 00:08:07.006 21:35:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59579 ]] 00:08:07.006 21:35:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59579 00:08:07.006 21:35:07 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59579 ']' 00:08:07.006 21:35:07 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59579 00:08:07.006 21:35:07 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:07.006 21:35:07 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:07.006 21:35:07 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59579 00:08:07.006 21:35:07 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:07.006 21:35:07 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:07.006 killing process with pid 59579 00:08:07.006 21:35:07 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59579' 00:08:07.006 21:35:07 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59579 00:08:07.006 21:35:07 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59579 00:08:09.537 21:35:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:09.537 Process with pid 59561 is not found 00:08:09.537 21:35:10 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:09.537 21:35:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59561 ]] 00:08:09.537 21:35:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59561 00:08:09.537 21:35:10 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59561 ']' 00:08:09.537 21:35:10 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59561 00:08:09.537 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59561) - No such process 00:08:09.537 21:35:10 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59561 is not found' 00:08:09.537 21:35:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59579 ]] 00:08:09.537 21:35:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59579 00:08:09.537 21:35:10 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59579 ']' 00:08:09.537 21:35:10 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59579 00:08:09.537 Process with pid 59579 is not found 00:08:09.537 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59579) - No such process 00:08:09.537 21:35:10 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59579 is not found' 00:08:09.537 21:35:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:09.537 ************************************ 00:08:09.537 END TEST cpu_locks 00:08:09.537 ************************************ 00:08:09.537 00:08:09.537 real 0m55.059s 00:08:09.537 user 1m35.916s 00:08:09.537 sys 0m6.740s 00:08:09.537 21:35:10 event.cpu_locks -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:08:09.537 21:35:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:09.796 ************************************ 00:08:09.796 END TEST event 00:08:09.796 ************************************ 00:08:09.796 00:08:09.796 real 1m26.597s 00:08:09.796 user 2m39.179s 00:08:09.796 sys 0m10.986s 00:08:09.796 21:35:10 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.796 21:35:10 event -- common/autotest_common.sh@10 -- # set +x 00:08:09.796 21:35:10 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:09.796 21:35:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:09.796 21:35:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.796 21:35:10 -- common/autotest_common.sh@10 -- # set +x 00:08:09.796 ************************************ 00:08:09.796 START TEST thread 00:08:09.796 ************************************ 00:08:09.797 21:35:10 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:09.797 * Looking for test storage... 
00:08:09.797 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:09.797 21:35:10 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:09.797 21:35:10 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:08:09.797 21:35:10 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:09.797 21:35:10 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:09.797 21:35:10 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:09.797 21:35:10 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:09.797 21:35:10 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:09.797 21:35:10 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:09.797 21:35:10 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:09.797 21:35:10 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:09.797 21:35:10 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:09.797 21:35:10 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:10.055 21:35:10 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:10.055 21:35:10 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:10.055 21:35:10 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:10.055 21:35:10 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:10.055 21:35:10 thread -- scripts/common.sh@345 -- # : 1 00:08:10.055 21:35:10 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:10.055 21:35:10 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:10.055 21:35:10 thread -- scripts/common.sh@365 -- # decimal 1 00:08:10.055 21:35:10 thread -- scripts/common.sh@353 -- # local d=1 00:08:10.055 21:35:10 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:10.055 21:35:10 thread -- scripts/common.sh@355 -- # echo 1 00:08:10.056 21:35:10 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:10.056 21:35:10 thread -- scripts/common.sh@366 -- # decimal 2 00:08:10.056 21:35:10 thread -- scripts/common.sh@353 -- # local d=2 00:08:10.056 21:35:10 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:10.056 21:35:10 thread -- scripts/common.sh@355 -- # echo 2 00:08:10.056 21:35:10 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:10.056 21:35:10 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:10.056 21:35:10 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:10.056 21:35:10 thread -- scripts/common.sh@368 -- # return 0 00:08:10.056 21:35:10 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:10.056 21:35:10 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:10.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.056 --rc genhtml_branch_coverage=1 00:08:10.056 --rc genhtml_function_coverage=1 00:08:10.056 --rc genhtml_legend=1 00:08:10.056 --rc geninfo_all_blocks=1 00:08:10.056 --rc geninfo_unexecuted_blocks=1 00:08:10.056 00:08:10.056 ' 00:08:10.056 21:35:10 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:10.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.056 --rc genhtml_branch_coverage=1 00:08:10.056 --rc genhtml_function_coverage=1 00:08:10.056 --rc genhtml_legend=1 00:08:10.056 --rc geninfo_all_blocks=1 00:08:10.056 --rc geninfo_unexecuted_blocks=1 00:08:10.056 00:08:10.056 ' 00:08:10.056 21:35:10 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:10.056 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.056 --rc genhtml_branch_coverage=1 00:08:10.056 --rc genhtml_function_coverage=1 00:08:10.056 --rc genhtml_legend=1 00:08:10.056 --rc geninfo_all_blocks=1 00:08:10.056 --rc geninfo_unexecuted_blocks=1 00:08:10.056 00:08:10.056 ' 00:08:10.056 21:35:10 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:10.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.056 --rc genhtml_branch_coverage=1 00:08:10.056 --rc genhtml_function_coverage=1 00:08:10.056 --rc genhtml_legend=1 00:08:10.056 --rc geninfo_all_blocks=1 00:08:10.056 --rc geninfo_unexecuted_blocks=1 00:08:10.056 00:08:10.056 ' 00:08:10.056 21:35:10 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:10.056 21:35:10 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:10.056 21:35:10 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.056 21:35:10 thread -- common/autotest_common.sh@10 -- # set +x 00:08:10.056 ************************************ 00:08:10.056 START TEST thread_poller_perf 00:08:10.056 ************************************ 00:08:10.056 21:35:10 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:10.056 [2024-12-10 21:35:10.652836] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:08:10.056 [2024-12-10 21:35:10.653061] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59795 ] 00:08:10.056 [2024-12-10 21:35:10.835065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.315 [2024-12-10 21:35:10.970892] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.315 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:11.696 [2024-12-10T21:35:12.479Z] ====================================== 00:08:11.696 [2024-12-10T21:35:12.479Z] busy:2300660892 (cyc) 00:08:11.696 [2024-12-10T21:35:12.479Z] total_run_count: 319000 00:08:11.696 [2024-12-10T21:35:12.479Z] tsc_hz: 2290000000 (cyc) 00:08:11.696 [2024-12-10T21:35:12.479Z] ====================================== 00:08:11.696 [2024-12-10T21:35:12.479Z] poller_cost: 7212 (cyc), 3149 (nsec) 00:08:11.696 00:08:11.696 real 0m1.630s 00:08:11.696 user 0m1.413s 00:08:11.696 sys 0m0.107s 00:08:11.696 21:35:12 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.696 21:35:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:11.696 ************************************ 00:08:11.696 END TEST thread_poller_perf 00:08:11.696 ************************************ 00:08:11.696 21:35:12 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:11.696 21:35:12 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:11.697 21:35:12 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.697 21:35:12 thread -- common/autotest_common.sh@10 -- # set +x 00:08:11.697 ************************************ 00:08:11.697 START TEST thread_poller_perf 00:08:11.697 
************************************ 00:08:11.697 21:35:12 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:11.697 [2024-12-10 21:35:12.342272] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:08:11.697 [2024-12-10 21:35:12.342640] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59829 ] 00:08:11.956 [2024-12-10 21:35:12.523168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.956 [2024-12-10 21:35:12.675552] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.956 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:13.336 [2024-12-10T21:35:14.119Z] ====================================== 00:08:13.336 [2024-12-10T21:35:14.119Z] busy:2294257266 (cyc) 00:08:13.336 [2024-12-10T21:35:14.119Z] total_run_count: 4189000 00:08:13.336 [2024-12-10T21:35:14.119Z] tsc_hz: 2290000000 (cyc) 00:08:13.336 [2024-12-10T21:35:14.119Z] ====================================== 00:08:13.336 [2024-12-10T21:35:14.119Z] poller_cost: 547 (cyc), 238 (nsec) 00:08:13.336 00:08:13.336 real 0m1.616s 00:08:13.336 user 0m1.390s 00:08:13.336 sys 0m0.115s 00:08:13.336 21:35:13 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.336 21:35:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:13.336 ************************************ 00:08:13.336 END TEST thread_poller_perf 00:08:13.336 ************************************ 00:08:13.336 21:35:13 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:13.336 00:08:13.336 real 0m3.576s 00:08:13.336 user 0m2.964s 00:08:13.336 sys 0m0.398s 00:08:13.336 21:35:13 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.336 21:35:13 thread -- common/autotest_common.sh@10 -- # set +x 00:08:13.336 ************************************ 00:08:13.336 END TEST thread 00:08:13.336 ************************************ 00:08:13.336 21:35:14 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:13.336 21:35:14 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:13.336 21:35:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:13.336 21:35:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.336 21:35:14 -- common/autotest_common.sh@10 -- # set +x 00:08:13.336 ************************************ 00:08:13.336 START TEST app_cmdline 00:08:13.336 ************************************ 00:08:13.336 21:35:14 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:13.597 * Looking for test storage... 00:08:13.597 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:13.597 21:35:14 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:13.597 21:35:14 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:08:13.597 21:35:14 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:13.597 21:35:14 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:13.597 21:35:14 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:13.597 21:35:14 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:13.597 21:35:14 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:13.597 21:35:14 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:13.597 21:35:14 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:13.597 21:35:14 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:13.597 21:35:14 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:13.597 21:35:14 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:08:13.597 21:35:14 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:13.597 21:35:14 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:13.597 21:35:14 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:13.597 21:35:14 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:13.597 21:35:14 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:13.597 21:35:14 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:13.597 21:35:14 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:13.597 21:35:14 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:13.597 21:35:14 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:13.597 21:35:14 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:13.597 21:35:14 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:13.597 21:35:14 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:13.597 21:35:14 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:13.597 21:35:14 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:13.597 21:35:14 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:13.597 21:35:14 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:13.597 21:35:14 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:13.597 21:35:14 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:13.597 21:35:14 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:13.597 21:35:14 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:13.597 21:35:14 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:13.597 21:35:14 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:13.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.597 --rc genhtml_branch_coverage=1 00:08:13.597 --rc genhtml_function_coverage=1 00:08:13.597 --rc 
genhtml_legend=1 00:08:13.597 --rc geninfo_all_blocks=1 00:08:13.597 --rc geninfo_unexecuted_blocks=1 00:08:13.597 00:08:13.597 ' 00:08:13.597 21:35:14 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:13.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.597 --rc genhtml_branch_coverage=1 00:08:13.597 --rc genhtml_function_coverage=1 00:08:13.597 --rc genhtml_legend=1 00:08:13.597 --rc geninfo_all_blocks=1 00:08:13.597 --rc geninfo_unexecuted_blocks=1 00:08:13.597 00:08:13.597 ' 00:08:13.597 21:35:14 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:13.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.597 --rc genhtml_branch_coverage=1 00:08:13.597 --rc genhtml_function_coverage=1 00:08:13.597 --rc genhtml_legend=1 00:08:13.597 --rc geninfo_all_blocks=1 00:08:13.597 --rc geninfo_unexecuted_blocks=1 00:08:13.597 00:08:13.597 ' 00:08:13.597 21:35:14 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:13.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.597 --rc genhtml_branch_coverage=1 00:08:13.597 --rc genhtml_function_coverage=1 00:08:13.597 --rc genhtml_legend=1 00:08:13.597 --rc geninfo_all_blocks=1 00:08:13.597 --rc geninfo_unexecuted_blocks=1 00:08:13.597 00:08:13.597 ' 00:08:13.597 21:35:14 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:13.597 21:35:14 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59918 00:08:13.597 21:35:14 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:13.597 21:35:14 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59918 00:08:13.597 21:35:14 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59918 ']' 00:08:13.597 21:35:14 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.597 21:35:14 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:08:13.597 21:35:14 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.597 21:35:14 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:13.597 21:35:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:13.597 [2024-12-10 21:35:14.340814] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:08:13.597 [2024-12-10 21:35:14.341033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59918 ] 00:08:13.857 [2024-12-10 21:35:14.513730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.857 [2024-12-10 21:35:14.634121] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.799 21:35:15 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:14.799 21:35:15 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:08:14.799 21:35:15 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:15.058 { 00:08:15.058 "version": "SPDK v25.01-pre git sha1 cec5ba284", 00:08:15.058 "fields": { 00:08:15.058 "major": 25, 00:08:15.058 "minor": 1, 00:08:15.058 "patch": 0, 00:08:15.058 "suffix": "-pre", 00:08:15.058 "commit": "cec5ba284" 00:08:15.058 } 00:08:15.058 } 00:08:15.058 21:35:15 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:15.058 21:35:15 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:15.058 21:35:15 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:15.059 21:35:15 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:15.059 21:35:15 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:15.059 21:35:15 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.059 21:35:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:15.059 21:35:15 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:15.059 21:35:15 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:15.059 21:35:15 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.059 21:35:15 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:15.059 21:35:15 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:15.059 21:35:15 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:15.059 21:35:15 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:08:15.059 21:35:15 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:15.059 21:35:15 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:15.059 21:35:15 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:15.059 21:35:15 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:15.059 21:35:15 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:15.059 21:35:15 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:15.059 21:35:15 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:15.059 21:35:15 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:15.059 21:35:15 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:15.059 21:35:15 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:15.318 request: 00:08:15.318 { 00:08:15.318 "method": "env_dpdk_get_mem_stats", 00:08:15.318 "req_id": 1 00:08:15.318 } 00:08:15.318 Got JSON-RPC error response 00:08:15.318 response: 00:08:15.318 { 00:08:15.318 "code": -32601, 00:08:15.318 "message": "Method not found" 00:08:15.318 } 00:08:15.318 21:35:16 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:08:15.318 21:35:16 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:15.318 21:35:16 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:15.318 21:35:16 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:15.318 21:35:16 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59918 00:08:15.318 21:35:16 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59918 ']' 00:08:15.318 21:35:16 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59918 00:08:15.318 21:35:16 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:08:15.318 21:35:16 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:15.318 21:35:16 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59918 00:08:15.318 21:35:16 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:15.318 21:35:16 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:15.318 21:35:16 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59918' 00:08:15.318 killing process with pid 59918 00:08:15.318 21:35:16 app_cmdline -- common/autotest_common.sh@973 -- # kill 59918 00:08:15.318 21:35:16 app_cmdline -- common/autotest_common.sh@978 -- # wait 59918 00:08:17.870 ************************************ 00:08:17.870 END TEST app_cmdline 00:08:17.870 ************************************ 
00:08:17.870 00:08:17.870 real 0m4.494s 00:08:17.870 user 0m4.728s 00:08:17.870 sys 0m0.591s 00:08:17.870 21:35:18 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.870 21:35:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:17.870 21:35:18 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:17.870 21:35:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:17.870 21:35:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.870 21:35:18 -- common/autotest_common.sh@10 -- # set +x 00:08:17.870 ************************************ 00:08:17.870 START TEST version 00:08:17.870 ************************************ 00:08:17.870 21:35:18 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:18.129 * Looking for test storage... 00:08:18.129 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:18.129 21:35:18 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:18.130 21:35:18 version -- common/autotest_common.sh@1711 -- # lcov --version 00:08:18.130 21:35:18 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:18.130 21:35:18 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:18.130 21:35:18 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:18.130 21:35:18 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:18.130 21:35:18 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:18.130 21:35:18 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:18.130 21:35:18 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:18.130 21:35:18 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:18.130 21:35:18 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:18.130 21:35:18 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:18.130 21:35:18 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:18.130 21:35:18 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:08:18.130 21:35:18 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:18.130 21:35:18 version -- scripts/common.sh@344 -- # case "$op" in 00:08:18.130 21:35:18 version -- scripts/common.sh@345 -- # : 1 00:08:18.130 21:35:18 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:18.130 21:35:18 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:18.130 21:35:18 version -- scripts/common.sh@365 -- # decimal 1 00:08:18.130 21:35:18 version -- scripts/common.sh@353 -- # local d=1 00:08:18.130 21:35:18 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:18.130 21:35:18 version -- scripts/common.sh@355 -- # echo 1 00:08:18.130 21:35:18 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:18.130 21:35:18 version -- scripts/common.sh@366 -- # decimal 2 00:08:18.130 21:35:18 version -- scripts/common.sh@353 -- # local d=2 00:08:18.130 21:35:18 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:18.130 21:35:18 version -- scripts/common.sh@355 -- # echo 2 00:08:18.130 21:35:18 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:18.130 21:35:18 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:18.130 21:35:18 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:18.130 21:35:18 version -- scripts/common.sh@368 -- # return 0 00:08:18.130 21:35:18 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:18.130 21:35:18 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:18.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.130 --rc genhtml_branch_coverage=1 00:08:18.130 --rc genhtml_function_coverage=1 00:08:18.130 --rc genhtml_legend=1 00:08:18.130 --rc geninfo_all_blocks=1 00:08:18.130 --rc geninfo_unexecuted_blocks=1 00:08:18.130 00:08:18.130 ' 00:08:18.130 21:35:18 version -- common/autotest_common.sh@1724 -- # 
LCOV_OPTS=' 00:08:18.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.130 --rc genhtml_branch_coverage=1 00:08:18.130 --rc genhtml_function_coverage=1 00:08:18.130 --rc genhtml_legend=1 00:08:18.130 --rc geninfo_all_blocks=1 00:08:18.130 --rc geninfo_unexecuted_blocks=1 00:08:18.130 00:08:18.130 ' 00:08:18.130 21:35:18 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:18.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.130 --rc genhtml_branch_coverage=1 00:08:18.130 --rc genhtml_function_coverage=1 00:08:18.130 --rc genhtml_legend=1 00:08:18.130 --rc geninfo_all_blocks=1 00:08:18.130 --rc geninfo_unexecuted_blocks=1 00:08:18.130 00:08:18.130 ' 00:08:18.130 21:35:18 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:18.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.130 --rc genhtml_branch_coverage=1 00:08:18.130 --rc genhtml_function_coverage=1 00:08:18.130 --rc genhtml_legend=1 00:08:18.130 --rc geninfo_all_blocks=1 00:08:18.130 --rc geninfo_unexecuted_blocks=1 00:08:18.130 00:08:18.130 ' 00:08:18.130 21:35:18 version -- app/version.sh@17 -- # get_header_version major 00:08:18.130 21:35:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:18.130 21:35:18 version -- app/version.sh@14 -- # cut -f2 00:08:18.130 21:35:18 version -- app/version.sh@14 -- # tr -d '"' 00:08:18.130 21:35:18 version -- app/version.sh@17 -- # major=25 00:08:18.130 21:35:18 version -- app/version.sh@18 -- # get_header_version minor 00:08:18.130 21:35:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:18.130 21:35:18 version -- app/version.sh@14 -- # cut -f2 00:08:18.130 21:35:18 version -- app/version.sh@14 -- # tr -d '"' 00:08:18.130 21:35:18 version -- app/version.sh@18 -- # minor=1 00:08:18.130 21:35:18 
version -- app/version.sh@19 -- # get_header_version patch 00:08:18.130 21:35:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:18.130 21:35:18 version -- app/version.sh@14 -- # cut -f2 00:08:18.130 21:35:18 version -- app/version.sh@14 -- # tr -d '"' 00:08:18.130 21:35:18 version -- app/version.sh@19 -- # patch=0 00:08:18.130 21:35:18 version -- app/version.sh@20 -- # get_header_version suffix 00:08:18.130 21:35:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:18.130 21:35:18 version -- app/version.sh@14 -- # cut -f2 00:08:18.130 21:35:18 version -- app/version.sh@14 -- # tr -d '"' 00:08:18.130 21:35:18 version -- app/version.sh@20 -- # suffix=-pre 00:08:18.130 21:35:18 version -- app/version.sh@22 -- # version=25.1 00:08:18.130 21:35:18 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:18.130 21:35:18 version -- app/version.sh@28 -- # version=25.1rc0 00:08:18.130 21:35:18 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:18.130 21:35:18 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:18.130 21:35:18 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:18.130 21:35:18 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:18.130 ************************************ 00:08:18.130 END TEST version 00:08:18.130 ************************************ 00:08:18.130 00:08:18.130 real 0m0.312s 00:08:18.130 user 0m0.189s 00:08:18.130 sys 0m0.179s 00:08:18.130 21:35:18 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.130 21:35:18 version -- common/autotest_common.sh@10 -- # set +x 00:08:18.389 
21:35:18 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']'
00:08:18.389 21:35:18 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]]
00:08:18.389 21:35:18 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh
00:08:18.389 21:35:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:18.389 21:35:18 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:18.389 21:35:18 -- common/autotest_common.sh@10 -- # set +x
00:08:18.389 ************************************
00:08:18.389 START TEST bdev_raid
00:08:18.389 ************************************
00:08:18.389 21:35:18 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh
00:08:18.389 * Looking for test storage...
00:08:18.389 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:08:18.389 21:35:19 bdev_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:18.389 21:35:19 bdev_raid -- common/autotest_common.sh@1711 -- # lcov --version
00:08:18.389 21:35:19 bdev_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:18.389 21:35:19 bdev_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:18.389 21:35:19 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:18.389 21:35:19 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:18.389 21:35:19 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:18.389 21:35:19 bdev_raid -- scripts/common.sh@336 -- # IFS=.-:
00:08:18.389 21:35:19 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1
00:08:18.389 21:35:19 bdev_raid -- scripts/common.sh@337 -- # IFS=.-:
00:08:18.389 21:35:19 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2
00:08:18.389 21:35:19 bdev_raid -- scripts/common.sh@338 -- # local 'op=<'
00:08:18.389 21:35:19 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2
00:08:18.389 21:35:19 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1
00:08:18.389 21:35:19 bdev_raid -- scripts/common.sh@343 -- # local
lt=0 gt=0 eq=0 v
00:08:18.389 21:35:19 bdev_raid -- scripts/common.sh@344 -- # case "$op" in
00:08:18.389 21:35:19 bdev_raid -- scripts/common.sh@345 -- # : 1
00:08:18.389 21:35:19 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:18.389 21:35:19 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:18.389 21:35:19 bdev_raid -- scripts/common.sh@365 -- # decimal 1
00:08:18.389 21:35:19 bdev_raid -- scripts/common.sh@353 -- # local d=1
00:08:18.389 21:35:19 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:18.389 21:35:19 bdev_raid -- scripts/common.sh@355 -- # echo 1
00:08:18.389 21:35:19 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1
00:08:18.389 21:35:19 bdev_raid -- scripts/common.sh@366 -- # decimal 2
00:08:18.389 21:35:19 bdev_raid -- scripts/common.sh@353 -- # local d=2
00:08:18.389 21:35:19 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:18.389 21:35:19 bdev_raid -- scripts/common.sh@355 -- # echo 2
00:08:18.389 21:35:19 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2
00:08:18.389 21:35:19 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:18.648 21:35:19 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:18.648 21:35:19 bdev_raid -- scripts/common.sh@368 -- # return 0
00:08:18.648 21:35:19 bdev_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:18.648 21:35:19 bdev_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:18.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:18.648 --rc genhtml_branch_coverage=1
00:08:18.648 --rc genhtml_function_coverage=1
00:08:18.648 --rc genhtml_legend=1
00:08:18.648 --rc geninfo_all_blocks=1
00:08:18.648 --rc geninfo_unexecuted_blocks=1
00:08:18.648 
00:08:18.648 '
00:08:18.648 21:35:19 bdev_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:18.648 --rc lcov_branch_coverage=1 --rc
lcov_function_coverage=1
00:08:18.648 --rc genhtml_branch_coverage=1
00:08:18.648 --rc genhtml_function_coverage=1
00:08:18.648 --rc genhtml_legend=1
00:08:18.648 --rc geninfo_all_blocks=1
00:08:18.648 --rc geninfo_unexecuted_blocks=1
00:08:18.648 
00:08:18.648 '
00:08:18.648 21:35:19 bdev_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:08:18.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:18.648 --rc genhtml_branch_coverage=1
00:08:18.648 --rc genhtml_function_coverage=1
00:08:18.648 --rc genhtml_legend=1
00:08:18.648 --rc geninfo_all_blocks=1
00:08:18.648 --rc geninfo_unexecuted_blocks=1
00:08:18.648 
00:08:18.648 '
00:08:18.648 21:35:19 bdev_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:08:18.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:18.648 --rc genhtml_branch_coverage=1
00:08:18.648 --rc genhtml_function_coverage=1
00:08:18.648 --rc genhtml_legend=1
00:08:18.648 --rc geninfo_all_blocks=1
00:08:18.648 --rc geninfo_unexecuted_blocks=1
00:08:18.648 
00:08:18.648 '
00:08:18.649 21:35:19 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:08:18.649 21:35:19 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e
00:08:18.649 21:35:19 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd
00:08:18.649 21:35:19 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest
00:08:18.649 21:35:19 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT
00:08:18.649 21:35:19 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512
00:08:18.649 21:35:19 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test
00:08:18.649 21:35:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:18.649 21:35:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:18.649 21:35:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:18.649 ************************************
00:08:18.649 START TEST raid1_resize_data_offset_test
00:08:18.649 ************************************
00:08:18.649 21:35:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test
00:08:18.649 21:35:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:18.649 21:35:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60111
00:08:18.649 21:35:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60111'
00:08:18.649 Process raid pid: 60111
00:08:18.649 21:35:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60111
00:08:18.649 21:35:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 60111 ']'
00:08:18.649 21:35:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:18.649 21:35:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:18.649 21:35:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:18.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:18.649 21:35:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:18.649 21:35:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:18.649 [2024-12-10 21:35:19.296032] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization...
00:08:18.649 [2024-12-10 21:35:19.296262] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:18.907 [2024-12-10 21:35:19.457137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:19.165 [2024-12-10 21:35:19.583285] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:08:19.165 [2024-12-10 21:35:19.805266] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:19.165 [2024-12-10 21:35:19.805310] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:19.424 21:35:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:19.424 21:35:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0
00:08:19.424 21:35:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16
00:08:19.424 21:35:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:19.424 21:35:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:19.683 malloc0
00:08:19.683 21:35:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:19.683 21:35:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16
00:08:19.683 21:35:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:19.683 21:35:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:19.683 malloc1
00:08:19.683 21:35:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:19.683 21:35:20
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512
00:08:19.683 21:35:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:19.683 21:35:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:19.683 null0
00:08:19.683 21:35:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:19.683 21:35:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s
00:08:19.683 21:35:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:19.683 21:35:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:19.683 [2024-12-10 21:35:20.323205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed
00:08:19.683 [2024-12-10 21:35:20.325171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:08:19.683 [2024-12-10 21:35:20.325225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed
00:08:19.683 [2024-12-10 21:35:20.325377] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:08:19.683 [2024-12-10 21:35:20.325390] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512
00:08:19.683 [2024-12-10 21:35:20.325687] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:08:19.683 [2024-12-10 21:35:20.325855] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:08:19.683 [2024-12-10 21:35:20.325869] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:08:19.683 [2024-12-10 21:35:20.326018] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:19.683 21:35:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:19.683 21:35:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:19.683 21:35:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:19.683 21:35:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:08:19.683 21:35:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:19.683 21:35:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:19.683 21:35:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 ))
00:08:19.683 21:35:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0
00:08:19.683 21:35:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:19.683 21:35:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:19.683 [2024-12-10 21:35:20.379172] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0
00:08:19.683 21:35:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:19.683 21:35:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30
00:08:19.683 21:35:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:19.683 21:35:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:20.250 malloc2
00:08:20.250 21:35:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:20.250 21:35:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev
Raid malloc2
00:08:20.250 21:35:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:20.250 21:35:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:20.250 [2024-12-10 21:35:20.949321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:08:20.250 [2024-12-10 21:35:20.966457] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:08:20.250 21:35:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:20.250 [2024-12-10 21:35:20.968400] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid
00:08:20.250 21:35:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:20.250 21:35:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:08:20.250 21:35:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:20.250 21:35:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:20.250 21:35:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:20.250 21:35:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 ))
00:08:20.250 21:35:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60111
00:08:20.250 21:35:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 60111 ']'
00:08:20.250 21:35:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 60111
00:08:20.251 21:35:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname
00:08:20.251 21:35:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux
']'
00:08:20.251 21:35:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60111
00:08:20.508 killing process with pid 60111
00:08:20.508 21:35:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:20.508 21:35:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:20.508 21:35:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60111'
00:08:20.508 21:35:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 60111
00:08:20.508 21:35:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 60111
00:08:20.508 [2024-12-10 21:35:21.056267] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:20.508 [2024-12-10 21:35:21.057690] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled
00:08:20.508 [2024-12-10 21:35:21.057770] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:20.508 [2024-12-10 21:35:21.057790] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2
00:08:20.508 [2024-12-10 21:35:21.093694] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:20.508 [2024-12-10 21:35:21.094012] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:20.508 [2024-12-10 21:35:21.094029] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:08:22.410 [2024-12-10 21:35:22.933413] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:23.348 ************************************
00:08:23.348 END TEST raid1_resize_data_offset_test
00:08:23.348 ************************************
00:08:23.348 21:35:24 bdev_raid.raid1_resize_data_offset_test --
bdev/bdev_raid.sh@943 -- # return 0
00:08:23.348 
00:08:23.348 real 0m4.907s
00:08:23.348 user 0m4.820s
00:08:23.348 sys 0m0.534s
00:08:23.348 21:35:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:23.348 21:35:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.607 21:35:24 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0
00:08:23.607 21:35:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:23.607 21:35:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:23.607 21:35:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:23.607 ************************************
00:08:23.607 START TEST raid0_resize_superblock_test
00:08:23.607 ************************************
00:08:23.607 21:35:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0
00:08:23.607 21:35:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0
00:08:23.607 21:35:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60195
00:08:23.607 Process raid pid: 60195
00:08:23.607 21:35:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:23.607 21:35:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60195'
00:08:23.607 21:35:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60195
00:08:23.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:23.607 21:35:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60195 ']'
00:08:23.607 21:35:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:23.607 21:35:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:23.607 21:35:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:23.607 21:35:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:23.607 21:35:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.607 [2024-12-10 21:35:24.264754] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization...
00:08:23.607 [2024-12-10 21:35:24.264982] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:23.865 [2024-12-10 21:35:24.440044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:24.122 [2024-12-10 21:35:24.565448] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:08:24.122 [2024-12-10 21:35:24.788899] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:24.122 [2024-12-10 21:35:24.789032] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:24.381 21:35:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:24.381 21:35:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:08:24.381 21:35:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:08:24.381 21:35:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:24.381 21:35:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:24.949 malloc0
00:08:24.949 21:35:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:24.949 21:35:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:08:24.949 21:35:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:24.949 21:35:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:24.949 [2024-12-10 21:35:25.716049] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:08:24.949 [2024-12-10 21:35:25.716187] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:24.949 [2024-12-10 21:35:25.716244] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:08:24.949 [2024-12-10 21:35:25.716290] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:24.949 [2024-12-10 21:35:25.718759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:24.949 [2024-12-10 21:35:25.718854] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:08:24.949 pt0
00:08:24.949 21:35:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:24.949 21:35:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:08:24.949 21:35:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:24.949 21:35:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.208 405f8828-3944-4bd6-80e8-9542b4698bcc
00:08:25.208 21:35:25
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:25.208 21:35:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:08:25.208 21:35:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:25.208 21:35:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.208 06712d09-8d49-4ef8-b533-b8139258012d
00:08:25.208 21:35:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:25.208 21:35:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:08:25.208 21:35:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:25.208 21:35:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.208 512b168f-89a0-47b4-914c-f15999eb3258
00:08:25.208 21:35:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:25.208 21:35:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:08:25.208 21:35:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:08:25.208 21:35:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:25.208 21:35:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.208 [2024-12-10 21:35:25.851602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 06712d09-8d49-4ef8-b533-b8139258012d is claimed
00:08:25.208 [2024-12-10 21:35:25.851716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 512b168f-89a0-47b4-914c-f15999eb3258 is claimed
00:08:25.208 [2024-12-10 21:35:25.851871]
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:08:25.208 [2024-12-10 21:35:25.851889] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512
00:08:25.208 [2024-12-10 21:35:25.852210] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:08:25.208 [2024-12-10 21:35:25.852428] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:08:25.208 [2024-12-10 21:35:25.852465] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:08:25.208 [2024-12-10 21:35:25.852663] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:25.208 21:35:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:25.208 21:35:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:08:25.208 21:35:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:08:25.208 21:35:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:25.208 21:35:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.208 21:35:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:25.208 21:35:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:08:25.208 21:35:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:08:25.208 21:35:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:08:25.208 21:35:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:25.208 21:35:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.208 21:35:25
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:25.208 21:35:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:08:25.208 21:35:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:08:25.208 21:35:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid
00:08:25.208 21:35:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:25.208 21:35:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:08:25.208 21:35:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks'
00:08:25.208 21:35:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.208 [2024-12-10 21:35:25.967681] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:25.208 21:35:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:25.208 21:35:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:08:25.467 21:35:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:08:25.467 21:35:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 ))
00:08:25.467 21:35:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:08:25.467 21:35:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:25.467 21:35:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.467 [2024-12-10 21:35:25.995572] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:08:25.467 [2024-12-10 21:35:25.995648] bdev_raid.c:2330:raid_bdev_resize_base_bdev:
*NOTICE*: base_bdev '06712d09-8d49-4ef8-b533-b8139258012d' was resized: old size 131072, new size 204800
00:08:25.467 21:35:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:25.467 21:35:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:08:25.467 21:35:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:25.467 21:35:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.467 [2024-12-10 21:35:26.007391] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:08:25.467 [2024-12-10 21:35:26.007477] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '512b168f-89a0-47b4-914c-f15999eb3258' was resized: old size 131072, new size 204800
00:08:25.467 [2024-12-10 21:35:26.007543] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216
00:08:25.467 21:35:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:25.467 21:35:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:08:25.467 21:35:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:25.467 21:35:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.467 21:35:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:08:25.467 21:35:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:25.467 21:35:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:08:25.467 21:35:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:08:25.467 21:35:26
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:08:25.467 21:35:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:25.467 21:35:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.467 21:35:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:25.467 21:35:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:08:25.467 21:35:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:08:25.467 21:35:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks'
00:08:25.467 21:35:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:08:25.467 21:35:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid
00:08:25.467 21:35:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:25.467 21:35:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.467 [2024-12-10 21:35:26.123363] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:25.467 21:35:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:25.467 21:35:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:08:25.467 21:35:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:08:25.467 21:35:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 ))
00:08:25.467 21:35:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:08:25.467 21:35:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- #
xtrace_disable 00:08:25.468 21:35:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.468 [2024-12-10 21:35:26.155079] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:08:25.468 [2024-12-10 21:35:26.155221] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:08:25.468 [2024-12-10 21:35:26.155255] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:25.468 [2024-12-10 21:35:26.155298] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:08:25.468 [2024-12-10 21:35:26.155450] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:25.468 [2024-12-10 21:35:26.155533] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:25.468 [2024-12-10 21:35:26.155609] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:25.468 21:35:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.468 21:35:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:25.468 21:35:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.468 21:35:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.468 [2024-12-10 21:35:26.166961] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:25.468 [2024-12-10 21:35:26.167022] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:25.468 [2024-12-10 21:35:26.167043] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:25.468 [2024-12-10 21:35:26.167055] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:25.468 
[2024-12-10 21:35:26.169386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:25.468 [2024-12-10 21:35:26.169442] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:25.468 [2024-12-10 21:35:26.171213] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 06712d09-8d49-4ef8-b533-b8139258012d 00:08:25.468 [2024-12-10 21:35:26.171272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 06712d09-8d49-4ef8-b533-b8139258012d is claimed 00:08:25.468 [2024-12-10 21:35:26.171368] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 512b168f-89a0-47b4-914c-f15999eb3258 00:08:25.468 [2024-12-10 21:35:26.171386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 512b168f-89a0-47b4-914c-f15999eb3258 is claimed 00:08:25.468 [2024-12-10 21:35:26.171599] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 512b168f-89a0-47b4-914c-f15999eb3258 (2) smaller than existing raid bdev Raid (3) 00:08:25.468 [2024-12-10 21:35:26.171631] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 06712d09-8d49-4ef8-b533-b8139258012d: File exists 00:08:25.468 [2024-12-10 21:35:26.171667] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:25.468 [2024-12-10 21:35:26.171681] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:08:25.468 pt0 00:08:25.468 [2024-12-10 21:35:26.171955] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:25.468 [2024-12-10 21:35:26.172127] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:25.468 [2024-12-10 21:35:26.172136] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:08:25.468 [2024-12-10 21:35:26.172308] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:08:25.468 21:35:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.468 21:35:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:08:25.468 21:35:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.468 21:35:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.468 21:35:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.468 21:35:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:25.468 21:35:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:25.468 21:35:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.468 21:35:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:25.468 21:35:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:08:25.468 21:35:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.468 [2024-12-10 21:35:26.195593] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:25.468 21:35:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.468 21:35:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:25.468 21:35:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:25.468 21:35:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:08:25.468 21:35:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60195 00:08:25.468 21:35:26 bdev_raid.raid0_resize_superblock_test 
-- common/autotest_common.sh@954 -- # '[' -z 60195 ']' 00:08:25.468 21:35:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60195 00:08:25.468 21:35:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:25.468 21:35:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:25.468 21:35:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60195 00:08:25.726 killing process with pid 60195 00:08:25.726 21:35:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:25.726 21:35:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:25.726 21:35:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60195' 00:08:25.726 21:35:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60195 00:08:25.726 [2024-12-10 21:35:26.267268] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:25.726 [2024-12-10 21:35:26.267362] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:25.726 [2024-12-10 21:35:26.267433] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:25.726 [2024-12-10 21:35:26.267444] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:08:25.726 21:35:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60195 00:08:27.105 [2024-12-10 21:35:27.765323] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:28.484 21:35:28 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:08:28.484 00:08:28.484 real 0m4.811s 00:08:28.484 user 0m5.012s 00:08:28.484 sys 0m0.601s 
00:08:28.484 21:35:28 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.484 21:35:28 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.484 ************************************ 00:08:28.484 END TEST raid0_resize_superblock_test 00:08:28.484 ************************************ 00:08:28.484 21:35:29 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:08:28.484 21:35:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:28.484 21:35:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.484 21:35:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:28.484 ************************************ 00:08:28.484 START TEST raid1_resize_superblock_test 00:08:28.484 ************************************ 00:08:28.484 21:35:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:08:28.484 21:35:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:08:28.484 21:35:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60299 00:08:28.484 21:35:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60299' 00:08:28.484 Process raid pid: 60299 00:08:28.484 21:35:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:28.484 21:35:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60299 00:08:28.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:28.484 21:35:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60299 ']' 00:08:28.484 21:35:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.484 21:35:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.484 21:35:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.484 21:35:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.484 21:35:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.484 [2024-12-10 21:35:29.137016] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:08:28.484 [2024-12-10 21:35:29.137142] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.742 [2024-12-10 21:35:29.292207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.742 [2024-12-10 21:35:29.419072] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.000 [2024-12-10 21:35:29.639645] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.000 [2024-12-10 21:35:29.639692] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.259 21:35:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:29.259 21:35:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:29.259 21:35:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 
00:08:29.259 21:35:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.259 21:35:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.195 malloc0 00:08:30.195 21:35:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.195 21:35:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:30.195 21:35:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.195 21:35:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.195 [2024-12-10 21:35:30.622853] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:30.195 [2024-12-10 21:35:30.622936] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:30.195 [2024-12-10 21:35:30.622960] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:30.195 [2024-12-10 21:35:30.622971] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:30.195 [2024-12-10 21:35:30.625307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:30.195 [2024-12-10 21:35:30.625350] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:30.195 pt0 00:08:30.195 21:35:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.195 21:35:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:08:30.195 21:35:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.195 21:35:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.196 29e39d9b-b2e6-4c37-a1b9-3e3a9ff45f3d 00:08:30.196 21:35:30 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.196 7d528871-be00-4455-ad09-dfa02d36a3a9 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.196 daa33816-ee37-4778-9c65-7e743ba0c95e 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.196 [2024-12-10 21:35:30.756985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7d528871-be00-4455-ad09-dfa02d36a3a9 is claimed 00:08:30.196 [2024-12-10 21:35:30.757096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev daa33816-ee37-4778-9c65-7e743ba0c95e is claimed 00:08:30.196 [2024-12-10 21:35:30.757246] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:30.196 [2024-12-10 21:35:30.757264] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:08:30.196 [2024-12-10 21:35:30.757602] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:30.196 [2024-12-10 21:35:30.757828] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:30.196 [2024-12-10 21:35:30.757847] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:30.196 [2024-12-10 21:35:30.758018] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.196 21:35:30 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:08:30.196 [2024-12-10 21:35:30.873087] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.196 [2024-12-10 21:35:30.920982] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:30.196 [2024-12-10 21:35:30.921019] bdev_raid.c:2330:raid_bdev_resize_base_bdev: 
*NOTICE*: base_bdev '7d528871-be00-4455-ad09-dfa02d36a3a9' was resized: old size 131072, new size 204800 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.196 [2024-12-10 21:35:30.932888] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:30.196 [2024-12-10 21:35:30.932919] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'daa33816-ee37-4778-9c65-7e743ba0c95e' was resized: old size 131072, new size 204800 00:08:30.196 [2024-12-10 21:35:30.932951] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:08:30.196 21:35:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.455 21:35:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:08:30.455 21:35:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:30.455 21:35:30 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:08:30.455 21:35:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.455 21:35:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.455 21:35:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.455 21:35:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:08:30.455 21:35:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:30.455 21:35:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:30.455 21:35:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.455 21:35:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.455 21:35:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:30.455 21:35:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:08:30.455 [2024-12-10 21:35:31.032862] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:30.455 21:35:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.455 21:35:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:30.455 21:35:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:30.455 21:35:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:08:30.455 21:35:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:08:30.455 21:35:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:30.455 21:35:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.455 [2024-12-10 21:35:31.064546] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:08:30.455 [2024-12-10 21:35:31.064640] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:08:30.455 [2024-12-10 21:35:31.064668] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:08:30.455 [2024-12-10 21:35:31.064855] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:30.455 [2024-12-10 21:35:31.065098] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:30.455 [2024-12-10 21:35:31.065183] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:30.455 [2024-12-10 21:35:31.065201] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:30.455 21:35:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.455 21:35:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:30.455 21:35:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.455 21:35:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.455 [2024-12-10 21:35:31.076393] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:30.455 [2024-12-10 21:35:31.076478] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:30.455 [2024-12-10 21:35:31.076501] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:30.455 [2024-12-10 21:35:31.076514] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:30.455 
[2024-12-10 21:35:31.078959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:30.455 [2024-12-10 21:35:31.079064] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:30.455 [2024-12-10 21:35:31.081007] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 7d528871-be00-4455-ad09-dfa02d36a3a9 00:08:30.455 [2024-12-10 21:35:31.081089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7d528871-be00-4455-ad09-dfa02d36a3a9 is claimed 00:08:30.455 [2024-12-10 21:35:31.081209] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev daa33816-ee37-4778-9c65-7e743ba0c95e 00:08:30.455 [2024-12-10 21:35:31.081230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev daa33816-ee37-4778-9c65-7e743ba0c95e is claimed 00:08:30.455 [2024-12-10 21:35:31.081407] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev daa33816-ee37-4778-9c65-7e743ba0c95e (2) smaller than existing raid bdev Raid (3) 00:08:30.455 [2024-12-10 21:35:31.081444] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 7d528871-be00-4455-ad09-dfa02d36a3a9: File exists 00:08:30.455 [2024-12-10 21:35:31.081487] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:30.455 [2024-12-10 21:35:31.081501] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:30.455 pt0 00:08:30.455 [2024-12-10 21:35:31.081792] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:30.456 [2024-12-10 21:35:31.081982] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:30.456 [2024-12-10 21:35:31.081992] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:08:30.456 [2024-12-10 21:35:31.082181] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:08:30.456 21:35:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.456 21:35:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:08:30.456 21:35:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.456 21:35:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.456 21:35:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.456 21:35:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:30.456 21:35:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:30.456 21:35:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.456 21:35:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:30.456 21:35:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:08:30.456 21:35:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.456 [2024-12-10 21:35:31.105321] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:30.456 21:35:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.456 21:35:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:30.456 21:35:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:30.456 21:35:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:08:30.456 21:35:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60299 00:08:30.456 21:35:31 bdev_raid.raid1_resize_superblock_test 
-- common/autotest_common.sh@954 -- # '[' -z 60299 ']' 00:08:30.456 21:35:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60299 00:08:30.456 21:35:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:30.456 21:35:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:30.456 21:35:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60299 00:08:30.456 killing process with pid 60299 00:08:30.456 21:35:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:30.456 21:35:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:30.456 21:35:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60299' 00:08:30.456 21:35:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60299 00:08:30.456 [2024-12-10 21:35:31.185136] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:30.456 [2024-12-10 21:35:31.185233] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:30.456 21:35:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60299 00:08:30.456 [2024-12-10 21:35:31.185293] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:30.456 [2024-12-10 21:35:31.185303] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:08:32.433 [2024-12-10 21:35:32.740900] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:33.369 21:35:33 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:08:33.369 00:08:33.369 real 0m4.904s 00:08:33.369 user 0m5.103s 00:08:33.369 sys 0m0.594s 
00:08:33.369 ************************************ 00:08:33.369 END TEST raid1_resize_superblock_test 00:08:33.369 ************************************ 00:08:33.369 21:35:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.369 21:35:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.369 21:35:33 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:08:33.369 21:35:34 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:08:33.369 21:35:34 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:08:33.369 21:35:34 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:08:33.369 21:35:34 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:08:33.369 21:35:34 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:08:33.369 21:35:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:33.369 21:35:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.369 21:35:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:33.369 ************************************ 00:08:33.369 START TEST raid_function_test_raid0 00:08:33.369 ************************************ 00:08:33.369 21:35:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:08:33.369 21:35:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:08:33.369 21:35:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:08:33.369 21:35:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:08:33.369 21:35:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60402 00:08:33.369 21:35:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:33.369 21:35:34 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60402' 00:08:33.369 Process raid pid: 60402 00:08:33.369 21:35:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60402 00:08:33.369 21:35:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60402 ']' 00:08:33.369 21:35:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.369 21:35:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.369 21:35:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.369 21:35:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.369 21:35:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:33.369 [2024-12-10 21:35:34.148688] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:08:33.369 [2024-12-10 21:35:34.148888] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.629 [2024-12-10 21:35:34.319247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.888 [2024-12-10 21:35:34.445505] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.888 [2024-12-10 21:35:34.663960] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:33.888 [2024-12-10 21:35:34.664006] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:34.456 21:35:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.456 21:35:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:08:34.456 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:08:34.456 21:35:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.456 21:35:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:34.456 Base_1 00:08:34.456 21:35:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.456 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:08:34.456 21:35:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.456 21:35:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:34.456 Base_2 00:08:34.456 21:35:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.456 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:08:34.456 21:35:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.456 21:35:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:34.456 [2024-12-10 21:35:35.145388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:34.456 [2024-12-10 21:35:35.147653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:34.456 [2024-12-10 21:35:35.147787] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:34.456 [2024-12-10 21:35:35.147838] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:34.456 [2024-12-10 21:35:35.148170] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:34.456 [2024-12-10 21:35:35.148389] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:34.456 [2024-12-10 21:35:35.148448] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:08:34.457 [2024-12-10 21:35:35.148690] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:34.457 21:35:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.457 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:08:34.457 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:34.457 21:35:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.457 21:35:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:34.457 21:35:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.457 21:35:35 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:08:34.457 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:08:34.457 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:08:34.457 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:08:34.457 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:08:34.457 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:34.457 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:08:34.457 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:34.457 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:08:34.457 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:34.457 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:34.457 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:08:34.715 [2024-12-10 21:35:35.421008] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:34.715 /dev/nbd0 00:08:34.715 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:34.715 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:34.715 21:35:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:34.715 21:35:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:08:34.715 21:35:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:34.715 
21:35:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:34.715 21:35:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:34.715 21:35:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:08:34.715 21:35:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:34.715 21:35:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:34.715 21:35:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:34.715 1+0 records in 00:08:34.715 1+0 records out 00:08:34.715 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266071 s, 15.4 MB/s 00:08:34.715 21:35:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.715 21:35:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:08:34.715 21:35:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.715 21:35:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:34.715 21:35:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:08:34.715 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:34.715 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:34.715 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:08:34.715 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:34.715 21:35:35 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:34.973 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:34.973 { 00:08:34.973 "nbd_device": "/dev/nbd0", 00:08:34.973 "bdev_name": "raid" 00:08:34.973 } 00:08:34.973 ]' 00:08:34.973 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:34.973 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:34.973 { 00:08:34.973 "nbd_device": "/dev/nbd0", 00:08:34.973 "bdev_name": "raid" 00:08:34.973 } 00:08:34.973 ]' 00:08:34.973 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:08:34.973 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:34.973 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:08:34.973 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:08:34.973 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:08:34.973 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:08:34.973 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:08:34.973 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:08:34.973 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:08:34.973 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:08:34.973 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:08:35.232 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:08:35.232 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v 
LOG-SEC 00:08:35.232 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:08:35.232 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:08:35.232 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:08:35.232 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:08:35.232 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:08:35.232 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:08:35.232 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:08:35.232 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:08:35.232 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:08:35.232 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:08:35.232 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:08:35.232 4096+0 records in 00:08:35.232 4096+0 records out 00:08:35.232 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0293715 s, 71.4 MB/s 00:08:35.232 21:35:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:08:35.491 4096+0 records in 00:08:35.491 4096+0 records out 00:08:35.491 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.226264 s, 9.3 MB/s 00:08:35.491 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:08:35.491 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:35.491 21:35:36 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:08:35.491 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:35.491 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:08:35.491 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:08:35.491 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:08:35.491 128+0 records in 00:08:35.491 128+0 records out 00:08:35.491 65536 bytes (66 kB, 64 KiB) copied, 0.00124071 s, 52.8 MB/s 00:08:35.491 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:08:35.491 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:35.491 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:35.491 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:35.491 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:35.491 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:08:35.491 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:08:35.491 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:08:35.491 2035+0 records in 00:08:35.491 2035+0 records out 00:08:35.491 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0142342 s, 73.2 MB/s 00:08:35.491 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:08:35.491 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:35.491 21:35:36 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:35.491 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:35.491 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:35.491 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:08:35.491 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:08:35.491 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:08:35.491 456+0 records in 00:08:35.491 456+0 records out 00:08:35.491 233472 bytes (233 kB, 228 KiB) copied, 0.00403823 s, 57.8 MB/s 00:08:35.491 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:08:35.491 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:35.491 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:35.491 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:35.491 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:35.491 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:08:35.491 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:08:35.491 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:08:35.491 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:35.491 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:35.491 21:35:36 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:08:35.491 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:35.491 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:08:35.750 [2024-12-10 21:35:36.386950] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:35.750 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:35.750 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:35.750 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:35.750 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:35.750 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:35.750 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:35.750 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:08:35.750 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:08:35.750 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:08:35.750 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:35.750 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:36.009 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:36.009 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:36.009 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:08:36.009 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:36.009 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:36.009 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:36.009 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:08:36.009 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:08:36.009 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:36.009 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:08:36.009 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:08:36.009 21:35:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60402 00:08:36.009 21:35:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60402 ']' 00:08:36.009 21:35:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60402 00:08:36.009 21:35:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:08:36.009 21:35:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:36.009 21:35:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60402 00:08:36.009 21:35:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:36.009 21:35:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:36.009 21:35:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60402' 00:08:36.009 killing process with pid 60402 00:08:36.010 21:35:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60402 
00:08:36.010 [2024-12-10 21:35:36.738650] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:36.010 [2024-12-10 21:35:36.738820] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:36.010 21:35:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60402 00:08:36.010 [2024-12-10 21:35:36.738914] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:36.010 [2024-12-10 21:35:36.738972] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:08:36.268 [2024-12-10 21:35:36.955706] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:37.673 21:35:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:08:37.673 00:08:37.673 real 0m4.121s 00:08:37.673 user 0m4.844s 00:08:37.673 sys 0m0.996s 00:08:37.673 21:35:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.673 21:35:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:37.673 ************************************ 00:08:37.673 END TEST raid_function_test_raid0 00:08:37.673 ************************************ 00:08:37.673 21:35:38 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:08:37.673 21:35:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:37.673 21:35:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.673 21:35:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:37.673 ************************************ 00:08:37.673 START TEST raid_function_test_concat 00:08:37.673 ************************************ 00:08:37.673 21:35:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:08:37.674 21:35:38 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:08:37.674 21:35:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:08:37.674 21:35:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:08:37.674 21:35:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60531 00:08:37.674 21:35:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:37.674 Process raid pid: 60531 00:08:37.674 21:35:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60531' 00:08:37.674 21:35:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60531 00:08:37.674 21:35:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60531 ']' 00:08:37.674 21:35:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.674 21:35:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:37.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.674 21:35:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.674 21:35:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:37.674 21:35:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:37.674 [2024-12-10 21:35:38.313805] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:08:37.674 [2024-12-10 21:35:38.313932] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.933 [2024-12-10 21:35:38.494233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.933 [2024-12-10 21:35:38.617635] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.191 [2024-12-10 21:35:38.836113] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.191 [2024-12-10 21:35:38.836159] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.450 21:35:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:38.450 21:35:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:08:38.450 21:35:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:08:38.450 21:35:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.450 21:35:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:38.450 Base_1 00:08:38.450 21:35:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.450 21:35:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:08:38.450 21:35:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.450 21:35:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:38.709 Base_2 00:08:38.709 21:35:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.709 21:35:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:08:38.709 21:35:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.709 21:35:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:38.709 [2024-12-10 21:35:39.268261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:38.709 [2024-12-10 21:35:39.270233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:38.709 [2024-12-10 21:35:39.270308] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:38.709 [2024-12-10 21:35:39.270320] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:38.709 [2024-12-10 21:35:39.270626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:38.709 [2024-12-10 21:35:39.270792] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:38.709 [2024-12-10 21:35:39.270801] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:08:38.709 [2024-12-10 21:35:39.270989] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:38.709 21:35:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.709 21:35:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:38.709 21:35:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:08:38.709 21:35:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.709 21:35:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:38.709 21:35:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.709 21:35:39 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:08:38.709 21:35:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:08:38.709 21:35:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:08:38.709 21:35:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:08:38.709 21:35:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:08:38.709 21:35:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:38.709 21:35:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:08:38.709 21:35:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:38.709 21:35:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:08:38.709 21:35:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:38.709 21:35:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:38.709 21:35:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:08:38.968 [2024-12-10 21:35:39.519903] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:38.968 /dev/nbd0 00:08:38.968 21:35:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:38.968 21:35:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:38.968 21:35:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:38.968 21:35:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:08:38.968 21:35:39 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:38.968 21:35:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:38.968 21:35:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:38.968 21:35:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:08:38.968 21:35:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:38.968 21:35:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:38.968 21:35:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:38.968 1+0 records in 00:08:38.968 1+0 records out 00:08:38.968 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000399899 s, 10.2 MB/s 00:08:38.968 21:35:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:38.968 21:35:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:08:38.968 21:35:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:38.968 21:35:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:38.968 21:35:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:08:38.968 21:35:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:38.968 21:35:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:38.968 21:35:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:08:38.968 21:35:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk.sock 00:08:38.968 21:35:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:39.227 21:35:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:39.227 { 00:08:39.227 "nbd_device": "/dev/nbd0", 00:08:39.227 "bdev_name": "raid" 00:08:39.227 } 00:08:39.227 ]' 00:08:39.227 21:35:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:39.227 21:35:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:39.227 { 00:08:39.227 "nbd_device": "/dev/nbd0", 00:08:39.227 "bdev_name": "raid" 00:08:39.227 } 00:08:39.227 ]' 00:08:39.227 21:35:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:08:39.227 21:35:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:08:39.227 21:35:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:39.227 21:35:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:08:39.227 21:35:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:08:39.227 21:35:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:08:39.227 21:35:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:08:39.227 21:35:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:08:39.227 21:35:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:08:39.227 21:35:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:08:39.227 21:35:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:08:39.227 21:35:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC 
/dev/nbd0 00:08:39.227 21:35:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:08:39.227 21:35:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:08:39.227 21:35:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:08:39.227 21:35:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:08:39.227 21:35:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:08:39.227 21:35:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:08:39.227 21:35:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:08:39.227 21:35:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:08:39.227 21:35:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:08:39.227 21:35:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:08:39.227 21:35:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:08:39.227 21:35:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:08:39.227 4096+0 records in 00:08:39.227 4096+0 records out 00:08:39.227 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0322418 s, 65.0 MB/s 00:08:39.227 21:35:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:08:39.486 4096+0 records in 00:08:39.486 4096+0 records out 00:08:39.486 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.22974 s, 9.1 MB/s 00:08:39.486 21:35:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:08:39.486 21:35:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 
2097152 /raidtest/raidrandtest /dev/nbd0 00:08:39.486 21:35:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:08:39.486 21:35:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:39.486 21:35:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:08:39.486 21:35:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:08:39.486 21:35:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:08:39.486 128+0 records in 00:08:39.486 128+0 records out 00:08:39.486 65536 bytes (66 kB, 64 KiB) copied, 0.00122706 s, 53.4 MB/s 00:08:39.486 21:35:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:08:39.486 21:35:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:39.486 21:35:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:39.486 21:35:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:39.486 21:35:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:39.486 21:35:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:08:39.486 21:35:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:08:39.486 21:35:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:08:39.486 2035+0 records in 00:08:39.487 2035+0 records out 00:08:39.487 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0128651 s, 81.0 MB/s 00:08:39.487 21:35:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:08:39.487 21:35:40 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:39.487 21:35:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:39.487 21:35:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:39.487 21:35:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:39.487 21:35:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:08:39.487 21:35:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:08:39.487 21:35:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:08:39.487 456+0 records in 00:08:39.487 456+0 records out 00:08:39.487 233472 bytes (233 kB, 228 KiB) copied, 0.00368893 s, 63.3 MB/s 00:08:39.487 21:35:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:08:39.487 21:35:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:39.487 21:35:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:39.745 21:35:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:39.745 21:35:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:39.745 21:35:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:08:39.745 21:35:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:08:39.745 21:35:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:08:39.745 21:35:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:39.745 
21:35:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:39.745 21:35:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:08:39.745 21:35:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:39.745 21:35:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:08:39.745 21:35:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:39.745 [2024-12-10 21:35:40.510840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.745 21:35:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:39.745 21:35:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:39.745 21:35:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:39.745 21:35:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:39.745 21:35:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:39.745 21:35:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:08:39.745 21:35:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:08:39.745 21:35:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:08:39.745 21:35:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:39.745 21:35:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:40.004 21:35:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:40.004 21:35:40 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:40.004 21:35:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:40.263 21:35:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:40.263 21:35:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:40.263 21:35:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:40.263 21:35:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:08:40.263 21:35:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:08:40.263 21:35:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:40.263 21:35:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:08:40.263 21:35:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:08:40.263 21:35:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60531 00:08:40.263 21:35:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60531 ']' 00:08:40.263 21:35:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60531 00:08:40.263 21:35:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:08:40.263 21:35:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:40.263 21:35:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60531 00:08:40.263 killing process with pid 60531 00:08:40.263 21:35:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:40.263 21:35:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:40.263 21:35:40 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 60531' 00:08:40.263 21:35:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60531 00:08:40.263 [2024-12-10 21:35:40.851845] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:40.263 [2024-12-10 21:35:40.851946] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:40.263 [2024-12-10 21:35:40.852001] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:40.263 [2024-12-10 21:35:40.852014] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:08:40.263 21:35:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60531 00:08:40.522 [2024-12-10 21:35:41.062890] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:41.923 21:35:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:08:41.923 00:08:41.923 real 0m4.055s 00:08:41.923 user 0m4.777s 00:08:41.923 sys 0m0.928s 00:08:41.923 ************************************ 00:08:41.923 END TEST raid_function_test_concat 00:08:41.923 ************************************ 00:08:41.923 21:35:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.923 21:35:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:41.923 21:35:42 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:08:41.923 21:35:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:41.923 21:35:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.923 21:35:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:41.923 ************************************ 00:08:41.923 START TEST raid0_resize_test 00:08:41.923 ************************************ 00:08:41.923 21:35:42 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:08:41.923 21:35:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:08:41.923 21:35:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:08:41.923 21:35:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:08:41.923 21:35:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:08:41.923 21:35:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:08:41.923 21:35:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:08:41.923 21:35:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:08:41.923 21:35:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:08:41.923 21:35:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60660 00:08:41.924 21:35:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:41.924 21:35:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60660' 00:08:41.924 Process raid pid: 60660 00:08:41.924 21:35:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60660 00:08:41.924 21:35:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60660 ']' 00:08:41.924 21:35:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.924 21:35:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:41.924 21:35:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:41.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.924 21:35:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:41.924 21:35:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.924 [2024-12-10 21:35:42.434393] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:08:41.924 [2024-12-10 21:35:42.434617] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.924 [2024-12-10 21:35:42.589737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.193 [2024-12-10 21:35:42.716053] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.193 [2024-12-10 21:35:42.940266] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:42.193 [2024-12-10 21:35:42.940413] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:42.761 21:35:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:42.761 21:35:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:08:42.761 21:35:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:08:42.761 21:35:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.761 21:35:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.761 Base_1 00:08:42.761 21:35:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.761 21:35:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:08:42.761 21:35:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:42.761 21:35:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.761 Base_2 00:08:42.761 21:35:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.761 21:35:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:08:42.761 21:35:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:08:42.761 21:35:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.761 21:35:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.761 [2024-12-10 21:35:43.341890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:42.761 [2024-12-10 21:35:43.344114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:42.761 [2024-12-10 21:35:43.344180] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:42.761 [2024-12-10 21:35:43.344192] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:42.761 [2024-12-10 21:35:43.344509] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:42.761 [2024-12-10 21:35:43.344649] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:42.761 [2024-12-10 21:35:43.344681] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:42.761 [2024-12-10 21:35:43.344877] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:42.761 21:35:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.761 21:35:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:08:42.761 21:35:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:42.761 21:35:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.761 [2024-12-10 21:35:43.353831] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:42.761 [2024-12-10 21:35:43.353899] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:08:42.761 true 00:08:42.762 21:35:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.762 21:35:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:42.762 21:35:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.762 21:35:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.762 21:35:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:08:42.762 [2024-12-10 21:35:43.365981] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:42.762 21:35:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.762 21:35:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:08:42.762 21:35:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:08:42.762 21:35:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:08:42.762 21:35:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:08:42.762 21:35:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:08:42.762 21:35:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:08:42.762 21:35:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.762 21:35:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.762 [2024-12-10 21:35:43.421761] 
bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:42.762 [2024-12-10 21:35:43.421854] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:08:42.762 [2024-12-10 21:35:43.421894] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:08:42.762 true 00:08:42.762 21:35:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.762 21:35:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:42.762 21:35:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.762 21:35:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.762 21:35:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:08:42.762 [2024-12-10 21:35:43.433898] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:42.762 21:35:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.762 21:35:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:08:42.762 21:35:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:08:42.762 21:35:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:08:42.762 21:35:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:08:42.762 21:35:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:08:42.762 21:35:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60660 00:08:42.762 21:35:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60660 ']' 00:08:42.762 21:35:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60660 00:08:42.762 21:35:43 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@959 -- # uname 00:08:42.762 21:35:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:42.762 21:35:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60660 00:08:42.762 21:35:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:42.762 21:35:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:42.762 21:35:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60660' 00:08:42.762 killing process with pid 60660 00:08:42.762 21:35:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60660 00:08:42.762 [2024-12-10 21:35:43.504210] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:42.762 [2024-12-10 21:35:43.504386] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:42.762 21:35:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60660 00:08:42.762 [2024-12-10 21:35:43.504503] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:42.762 [2024-12-10 21:35:43.504518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:42.762 [2024-12-10 21:35:43.523677] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:44.139 21:35:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:08:44.139 00:08:44.139 real 0m2.365s 00:08:44.139 user 0m2.536s 00:08:44.139 sys 0m0.320s 00:08:44.139 21:35:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.139 21:35:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.139 ************************************ 00:08:44.139 END TEST raid0_resize_test 00:08:44.139 
************************************ 00:08:44.139 21:35:44 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:08:44.139 21:35:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:44.139 21:35:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.139 21:35:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:44.139 ************************************ 00:08:44.139 START TEST raid1_resize_test 00:08:44.139 ************************************ 00:08:44.139 21:35:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:08:44.139 21:35:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:08:44.139 21:35:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:08:44.139 21:35:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:08:44.139 21:35:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:08:44.139 21:35:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:08:44.139 21:35:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:08:44.139 21:35:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:08:44.139 21:35:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:08:44.139 Process raid pid: 60721 00:08:44.139 21:35:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60721 00:08:44.139 21:35:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:44.139 21:35:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60721' 00:08:44.139 21:35:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60721 00:08:44.139 21:35:44 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@835 -- # '[' -z 60721 ']' 00:08:44.139 21:35:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.139 21:35:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:44.139 21:35:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.139 21:35:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:44.139 21:35:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.139 [2024-12-10 21:35:44.872287] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:08:44.139 [2024-12-10 21:35:44.872410] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.397 [2024-12-10 21:35:45.050310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.397 [2024-12-10 21:35:45.176262] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.656 [2024-12-10 21:35:45.400014] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:44.656 [2024-12-10 21:35:45.400060] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.225 Base_1 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.225 Base_2 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.225 [2024-12-10 21:35:45.746791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:45.225 [2024-12-10 21:35:45.748585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:45.225 [2024-12-10 21:35:45.748640] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:45.225 [2024-12-10 21:35:45.748651] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:45.225 [2024-12-10 21:35:45.748905] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:45.225 [2024-12-10 21:35:45.749026] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:45.225 [2024-12-10 21:35:45.749033] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name Raid, raid_bdev 0x617000007780 00:08:45.225 [2024-12-10 21:35:45.749168] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.225 [2024-12-10 21:35:45.758790] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:45.225 [2024-12-10 21:35:45.758862] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:08:45.225 true 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.225 [2024-12-10 21:35:45.774944] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 
00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.225 [2024-12-10 21:35:45.822698] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:45.225 [2024-12-10 21:35:45.822773] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:08:45.225 [2024-12-10 21:35:45.822859] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:08:45.225 true 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.225 [2024-12-10 21:35:45.838845] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:08:45.225 
21:35:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60721 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60721 ']' 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60721 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60721 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:45.225 21:35:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60721' 00:08:45.225 killing process with pid 60721 00:08:45.226 21:35:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60721 00:08:45.226 [2024-12-10 21:35:45.921779] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:45.226 [2024-12-10 21:35:45.921917] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:45.226 21:35:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60721 00:08:45.226 [2024-12-10 21:35:45.922435] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:45.226 [2024-12-10 21:35:45.922505] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:45.226 [2024-12-10 21:35:45.940461] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:46.602 21:35:47 bdev_raid.raid1_resize_test -- 
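[Editor's note] The size check logged above (`blkcnt=131072`, `raid_size_mb=64`, `'[' 64 '!=' 64 ']'`) reduces to simple block arithmetic. A minimal sketch of that computation, using the block count and 512-byte blocklen reported in the log (variable names mirror the script; this is a reconstruction, not the script itself):

```shell
# Reconstructed sketch of the raid1_resize_test size check.
# 131072 blocks and blocklen 512 are the values the log reports
# for 'Raid' after both base bdevs were resized from 32 MiB to 64 MiB.
blkcnt=131072
blocklen=512
raid_size_mb=$(( blkcnt * blocklen / 1048576 ))   # bytes -> MiB
expected_size=64
if [ "$raid_size_mb" -eq "$expected_size" ]; then
  echo "resize propagated: ${raid_size_mb} MiB"
fi
```

For RAID1 the array size tracks the (common) base bdev size rather than the sum, which is why resizing both 32 MiB null bdevs to 64 MiB yields a 64 MiB array, not 128 MiB.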
bdev/bdev_raid.sh@389 -- # return 0 00:08:46.602 00:08:46.602 real 0m2.356s 00:08:46.602 user 0m2.510s 00:08:46.602 sys 0m0.345s 00:08:46.602 21:35:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.602 21:35:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.602 ************************************ 00:08:46.602 END TEST raid1_resize_test 00:08:46.602 ************************************ 00:08:46.602 21:35:47 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:46.602 21:35:47 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:46.602 21:35:47 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:08:46.602 21:35:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:46.602 21:35:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.602 21:35:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:46.602 ************************************ 00:08:46.602 START TEST raid_state_function_test 00:08:46.602 ************************************ 00:08:46.602 21:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:08:46.602 21:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:46.602 21:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:46.602 21:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:46.602 21:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:46.603 21:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:46.603 21:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:46.603 21:35:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:46.603 21:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:46.603 21:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:46.603 21:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:46.603 21:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:46.603 21:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:46.603 21:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:46.603 21:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:46.603 21:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:46.603 21:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:46.603 21:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:46.603 21:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:46.603 21:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:46.603 21:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:46.603 21:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:46.603 21:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:46.603 21:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:46.603 21:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60778 00:08:46.603 21:35:47 bdev_raid.raid_state_function_test -- 
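[Editor's note] The `(( i = 1 )) ... (( i <= num_base_bdevs )) ... echo BaseBdevN` lines above are the script building its base-bdev name list. A self-contained sketch of that loop, assuming only what the trace shows (BaseBdev1/BaseBdev2 names, num_base_bdevs=2):

```shell
# Reconstructed sketch of the base_bdevs name-list construction
# traced above in raid_state_function_test.
num_base_bdevs=2
base_bdevs=()
for (( i = 1; i <= num_base_bdevs; i++ )); do
  base_bdevs+=("BaseBdev$i")
done
echo "${base_bdevs[@]}"   # BaseBdev1 BaseBdev2
```

In the actual script the loop body is captured via command substitution rather than a plain `for`, but the resulting list (`'BaseBdev1' 'BaseBdev2'`) is the same one later passed to `bdev_raid_create -b`.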
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:46.603 21:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60778' 00:08:46.603 Process raid pid: 60778 00:08:46.603 21:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60778 00:08:46.603 21:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60778 ']' 00:08:46.603 21:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.603 21:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:46.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.603 21:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.603 21:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:46.603 21:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.603 [2024-12-10 21:35:47.313181] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:08:46.603 [2024-12-10 21:35:47.313327] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.865 [2024-12-10 21:35:47.472683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.865 [2024-12-10 21:35:47.597576] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.123 [2024-12-10 21:35:47.830610] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.123 [2024-12-10 21:35:47.830663] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.692 21:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:47.692 21:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:47.692 21:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:47.692 21:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.692 21:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.692 [2024-12-10 21:35:48.187706] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:47.692 [2024-12-10 21:35:48.187767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:47.692 [2024-12-10 21:35:48.187779] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:47.692 [2024-12-10 21:35:48.187790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:47.692 21:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.692 21:35:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:47.692 21:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.692 21:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.692 21:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:47.692 21:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.692 21:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:47.692 21:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.692 21:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.692 21:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.692 21:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.692 21:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.692 21:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.692 21:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.692 21:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.692 21:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.692 21:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.692 "name": "Existed_Raid", 00:08:47.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.692 "strip_size_kb": 64, 00:08:47.692 "state": "configuring", 00:08:47.692 
"raid_level": "raid0", 00:08:47.692 "superblock": false, 00:08:47.692 "num_base_bdevs": 2, 00:08:47.692 "num_base_bdevs_discovered": 0, 00:08:47.692 "num_base_bdevs_operational": 2, 00:08:47.692 "base_bdevs_list": [ 00:08:47.692 { 00:08:47.692 "name": "BaseBdev1", 00:08:47.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.692 "is_configured": false, 00:08:47.692 "data_offset": 0, 00:08:47.692 "data_size": 0 00:08:47.692 }, 00:08:47.692 { 00:08:47.692 "name": "BaseBdev2", 00:08:47.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.692 "is_configured": false, 00:08:47.692 "data_offset": 0, 00:08:47.692 "data_size": 0 00:08:47.692 } 00:08:47.692 ] 00:08:47.692 }' 00:08:47.692 21:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.692 21:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.951 21:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:47.951 21:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.951 21:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.951 [2024-12-10 21:35:48.666800] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:47.951 [2024-12-10 21:35:48.666895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:47.951 21:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.951 21:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:47.951 21:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.951 21:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:47.951 [2024-12-10 21:35:48.674781] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:47.951 [2024-12-10 21:35:48.674856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:47.951 [2024-12-10 21:35:48.674902] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:47.951 [2024-12-10 21:35:48.674931] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:47.951 21:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.951 21:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:47.951 21:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.951 21:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.951 [2024-12-10 21:35:48.726070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:47.951 BaseBdev1 00:08:47.951 21:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.951 21:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:47.951 21:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:47.951 21:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:47.952 21:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:47.952 21:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:47.952 21:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:47.952 21:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:08:47.952 21:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.952 21:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.212 21:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.212 21:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:48.212 21:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.212 21:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.212 [ 00:08:48.212 { 00:08:48.212 "name": "BaseBdev1", 00:08:48.212 "aliases": [ 00:08:48.212 "d6c7fe54-69e1-4254-9d35-5ba70a273af7" 00:08:48.212 ], 00:08:48.212 "product_name": "Malloc disk", 00:08:48.212 "block_size": 512, 00:08:48.212 "num_blocks": 65536, 00:08:48.212 "uuid": "d6c7fe54-69e1-4254-9d35-5ba70a273af7", 00:08:48.212 "assigned_rate_limits": { 00:08:48.212 "rw_ios_per_sec": 0, 00:08:48.212 "rw_mbytes_per_sec": 0, 00:08:48.212 "r_mbytes_per_sec": 0, 00:08:48.212 "w_mbytes_per_sec": 0 00:08:48.212 }, 00:08:48.212 "claimed": true, 00:08:48.212 "claim_type": "exclusive_write", 00:08:48.212 "zoned": false, 00:08:48.212 "supported_io_types": { 00:08:48.212 "read": true, 00:08:48.212 "write": true, 00:08:48.212 "unmap": true, 00:08:48.212 "flush": true, 00:08:48.212 "reset": true, 00:08:48.212 "nvme_admin": false, 00:08:48.212 "nvme_io": false, 00:08:48.212 "nvme_io_md": false, 00:08:48.212 "write_zeroes": true, 00:08:48.212 "zcopy": true, 00:08:48.212 "get_zone_info": false, 00:08:48.212 "zone_management": false, 00:08:48.212 "zone_append": false, 00:08:48.212 "compare": false, 00:08:48.212 "compare_and_write": false, 00:08:48.212 "abort": true, 00:08:48.212 "seek_hole": false, 00:08:48.212 "seek_data": false, 00:08:48.212 "copy": true, 00:08:48.212 "nvme_iov_md": 
false 00:08:48.212 }, 00:08:48.212 "memory_domains": [ 00:08:48.212 { 00:08:48.212 "dma_device_id": "system", 00:08:48.212 "dma_device_type": 1 00:08:48.212 }, 00:08:48.212 { 00:08:48.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.212 "dma_device_type": 2 00:08:48.212 } 00:08:48.212 ], 00:08:48.212 "driver_specific": {} 00:08:48.212 } 00:08:48.212 ] 00:08:48.212 21:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.212 21:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:48.212 21:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:48.212 21:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.212 21:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.212 21:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:48.212 21:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.212 21:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:48.212 21:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.212 21:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.212 21:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.212 21:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.212 21:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.212 21:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.212 
21:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.212 21:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.212 21:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.212 21:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.212 "name": "Existed_Raid", 00:08:48.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.212 "strip_size_kb": 64, 00:08:48.212 "state": "configuring", 00:08:48.212 "raid_level": "raid0", 00:08:48.212 "superblock": false, 00:08:48.212 "num_base_bdevs": 2, 00:08:48.212 "num_base_bdevs_discovered": 1, 00:08:48.212 "num_base_bdevs_operational": 2, 00:08:48.212 "base_bdevs_list": [ 00:08:48.212 { 00:08:48.212 "name": "BaseBdev1", 00:08:48.212 "uuid": "d6c7fe54-69e1-4254-9d35-5ba70a273af7", 00:08:48.212 "is_configured": true, 00:08:48.212 "data_offset": 0, 00:08:48.212 "data_size": 65536 00:08:48.212 }, 00:08:48.212 { 00:08:48.212 "name": "BaseBdev2", 00:08:48.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.212 "is_configured": false, 00:08:48.212 "data_offset": 0, 00:08:48.212 "data_size": 0 00:08:48.212 } 00:08:48.212 ] 00:08:48.212 }' 00:08:48.212 21:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.212 21:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.473 21:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:48.473 21:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.473 21:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.473 [2024-12-10 21:35:49.189334] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:48.473 [2024-12-10 21:35:49.189389] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:48.473 21:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.473 21:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:48.473 21:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.473 21:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.473 [2024-12-10 21:35:49.201371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:48.473 [2024-12-10 21:35:49.203513] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:48.473 [2024-12-10 21:35:49.203642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:48.473 21:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.473 21:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:48.473 21:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:48.473 21:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:48.473 21:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.473 21:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.473 21:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:48.473 21:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.473 21:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:08:48.473 21:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.473 21:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.473 21:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.473 21:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.473 21:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.473 21:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.473 21:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.473 21:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.473 21:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.733 21:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.733 "name": "Existed_Raid", 00:08:48.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.733 "strip_size_kb": 64, 00:08:48.733 "state": "configuring", 00:08:48.733 "raid_level": "raid0", 00:08:48.733 "superblock": false, 00:08:48.733 "num_base_bdevs": 2, 00:08:48.733 "num_base_bdevs_discovered": 1, 00:08:48.733 "num_base_bdevs_operational": 2, 00:08:48.733 "base_bdevs_list": [ 00:08:48.733 { 00:08:48.733 "name": "BaseBdev1", 00:08:48.733 "uuid": "d6c7fe54-69e1-4254-9d35-5ba70a273af7", 00:08:48.733 "is_configured": true, 00:08:48.733 "data_offset": 0, 00:08:48.733 "data_size": 65536 00:08:48.733 }, 00:08:48.733 { 00:08:48.733 "name": "BaseBdev2", 00:08:48.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.733 "is_configured": false, 00:08:48.733 "data_offset": 0, 00:08:48.733 "data_size": 0 00:08:48.733 } 00:08:48.733 
] 00:08:48.733 }' 00:08:48.733 21:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.733 21:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.992 21:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:48.992 21:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.992 21:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.992 [2024-12-10 21:35:49.727444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:48.992 [2024-12-10 21:35:49.727587] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:48.992 [2024-12-10 21:35:49.727614] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:48.992 [2024-12-10 21:35:49.727949] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:48.992 [2024-12-10 21:35:49.728199] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:48.992 [2024-12-10 21:35:49.728253] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:48.992 [2024-12-10 21:35:49.728597] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.992 BaseBdev2 00:08:48.992 21:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.992 21:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:48.992 21:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:48.992 21:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:48.992 21:35:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:48.992 21:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:48.992 21:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:48.992 21:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:48.992 21:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.992 21:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.992 21:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.992 21:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:48.992 21:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.992 21:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.992 [ 00:08:48.992 { 00:08:48.992 "name": "BaseBdev2", 00:08:48.992 "aliases": [ 00:08:48.992 "d29eecbc-46f6-41bd-9f7d-76f18dd8d651" 00:08:48.992 ], 00:08:48.992 "product_name": "Malloc disk", 00:08:48.992 "block_size": 512, 00:08:48.992 "num_blocks": 65536, 00:08:48.992 "uuid": "d29eecbc-46f6-41bd-9f7d-76f18dd8d651", 00:08:48.992 "assigned_rate_limits": { 00:08:48.992 "rw_ios_per_sec": 0, 00:08:48.992 "rw_mbytes_per_sec": 0, 00:08:48.992 "r_mbytes_per_sec": 0, 00:08:48.992 "w_mbytes_per_sec": 0 00:08:48.992 }, 00:08:48.992 "claimed": true, 00:08:48.992 "claim_type": "exclusive_write", 00:08:48.992 "zoned": false, 00:08:48.992 "supported_io_types": { 00:08:48.992 "read": true, 00:08:48.992 "write": true, 00:08:48.992 "unmap": true, 00:08:48.992 "flush": true, 00:08:48.992 "reset": true, 00:08:48.992 "nvme_admin": false, 00:08:48.992 "nvme_io": false, 00:08:48.992 "nvme_io_md": 
false, 00:08:48.992 "write_zeroes": true, 00:08:48.992 "zcopy": true, 00:08:48.992 "get_zone_info": false, 00:08:48.992 "zone_management": false, 00:08:48.992 "zone_append": false, 00:08:48.992 "compare": false, 00:08:48.992 "compare_and_write": false, 00:08:48.992 "abort": true, 00:08:48.992 "seek_hole": false, 00:08:48.992 "seek_data": false, 00:08:48.992 "copy": true, 00:08:48.992 "nvme_iov_md": false 00:08:48.992 }, 00:08:48.992 "memory_domains": [ 00:08:48.992 { 00:08:48.992 "dma_device_id": "system", 00:08:48.992 "dma_device_type": 1 00:08:48.992 }, 00:08:48.992 { 00:08:48.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.992 "dma_device_type": 2 00:08:48.992 } 00:08:48.992 ], 00:08:48.992 "driver_specific": {} 00:08:48.992 } 00:08:48.992 ] 00:08:48.992 21:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.992 21:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:48.992 21:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:48.992 21:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:48.992 21:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:48.992 21:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.992 21:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:48.992 21:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:48.992 21:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.992 21:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:48.992 21:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:48.992 21:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.992 21:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.992 21:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.992 21:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.992 21:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.992 21:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.992 21:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.252 21:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.252 21:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.252 "name": "Existed_Raid", 00:08:49.252 "uuid": "e6447676-e263-41a0-ad99-fec9656fcc18", 00:08:49.252 "strip_size_kb": 64, 00:08:49.252 "state": "online", 00:08:49.252 "raid_level": "raid0", 00:08:49.252 "superblock": false, 00:08:49.252 "num_base_bdevs": 2, 00:08:49.252 "num_base_bdevs_discovered": 2, 00:08:49.252 "num_base_bdevs_operational": 2, 00:08:49.252 "base_bdevs_list": [ 00:08:49.252 { 00:08:49.252 "name": "BaseBdev1", 00:08:49.252 "uuid": "d6c7fe54-69e1-4254-9d35-5ba70a273af7", 00:08:49.252 "is_configured": true, 00:08:49.252 "data_offset": 0, 00:08:49.252 "data_size": 65536 00:08:49.252 }, 00:08:49.252 { 00:08:49.252 "name": "BaseBdev2", 00:08:49.252 "uuid": "d29eecbc-46f6-41bd-9f7d-76f18dd8d651", 00:08:49.252 "is_configured": true, 00:08:49.252 "data_offset": 0, 00:08:49.252 "data_size": 65536 00:08:49.252 } 00:08:49.252 ] 00:08:49.252 }' 00:08:49.252 21:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:49.252 21:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.518 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:49.518 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:49.518 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:49.518 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:49.518 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:49.518 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:49.518 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:49.518 21:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.518 21:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.518 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:49.518 [2024-12-10 21:35:50.202935] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:49.518 21:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.518 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:49.518 "name": "Existed_Raid", 00:08:49.518 "aliases": [ 00:08:49.518 "e6447676-e263-41a0-ad99-fec9656fcc18" 00:08:49.518 ], 00:08:49.518 "product_name": "Raid Volume", 00:08:49.518 "block_size": 512, 00:08:49.518 "num_blocks": 131072, 00:08:49.518 "uuid": "e6447676-e263-41a0-ad99-fec9656fcc18", 00:08:49.518 "assigned_rate_limits": { 00:08:49.518 "rw_ios_per_sec": 0, 00:08:49.518 "rw_mbytes_per_sec": 0, 00:08:49.518 "r_mbytes_per_sec": 
0, 00:08:49.518 "w_mbytes_per_sec": 0 00:08:49.518 }, 00:08:49.518 "claimed": false, 00:08:49.518 "zoned": false, 00:08:49.518 "supported_io_types": { 00:08:49.518 "read": true, 00:08:49.518 "write": true, 00:08:49.518 "unmap": true, 00:08:49.518 "flush": true, 00:08:49.518 "reset": true, 00:08:49.518 "nvme_admin": false, 00:08:49.518 "nvme_io": false, 00:08:49.518 "nvme_io_md": false, 00:08:49.518 "write_zeroes": true, 00:08:49.518 "zcopy": false, 00:08:49.518 "get_zone_info": false, 00:08:49.518 "zone_management": false, 00:08:49.518 "zone_append": false, 00:08:49.518 "compare": false, 00:08:49.518 "compare_and_write": false, 00:08:49.518 "abort": false, 00:08:49.518 "seek_hole": false, 00:08:49.518 "seek_data": false, 00:08:49.518 "copy": false, 00:08:49.518 "nvme_iov_md": false 00:08:49.518 }, 00:08:49.518 "memory_domains": [ 00:08:49.518 { 00:08:49.518 "dma_device_id": "system", 00:08:49.518 "dma_device_type": 1 00:08:49.518 }, 00:08:49.518 { 00:08:49.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.518 "dma_device_type": 2 00:08:49.518 }, 00:08:49.518 { 00:08:49.518 "dma_device_id": "system", 00:08:49.518 "dma_device_type": 1 00:08:49.518 }, 00:08:49.518 { 00:08:49.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.518 "dma_device_type": 2 00:08:49.518 } 00:08:49.518 ], 00:08:49.518 "driver_specific": { 00:08:49.518 "raid": { 00:08:49.518 "uuid": "e6447676-e263-41a0-ad99-fec9656fcc18", 00:08:49.518 "strip_size_kb": 64, 00:08:49.518 "state": "online", 00:08:49.518 "raid_level": "raid0", 00:08:49.518 "superblock": false, 00:08:49.518 "num_base_bdevs": 2, 00:08:49.518 "num_base_bdevs_discovered": 2, 00:08:49.518 "num_base_bdevs_operational": 2, 00:08:49.518 "base_bdevs_list": [ 00:08:49.518 { 00:08:49.518 "name": "BaseBdev1", 00:08:49.518 "uuid": "d6c7fe54-69e1-4254-9d35-5ba70a273af7", 00:08:49.518 "is_configured": true, 00:08:49.518 "data_offset": 0, 00:08:49.518 "data_size": 65536 00:08:49.518 }, 00:08:49.518 { 00:08:49.518 "name": "BaseBdev2", 
00:08:49.518 "uuid": "d29eecbc-46f6-41bd-9f7d-76f18dd8d651", 00:08:49.518 "is_configured": true, 00:08:49.518 "data_offset": 0, 00:08:49.518 "data_size": 65536 00:08:49.518 } 00:08:49.518 ] 00:08:49.518 } 00:08:49.518 } 00:08:49.518 }' 00:08:49.518 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:49.518 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:49.518 BaseBdev2' 00:08:49.518 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.778 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:49.778 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.778 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:49.778 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.778 21:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.778 21:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.778 21:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.778 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.778 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.778 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.778 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:08:49.778 21:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.778 21:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.778 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.778 21:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.778 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.778 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.778 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:49.778 21:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.778 21:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.778 [2024-12-10 21:35:50.434374] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:49.778 [2024-12-10 21:35:50.434504] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:49.778 [2024-12-10 21:35:50.434607] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:49.778 21:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.778 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:49.778 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:49.778 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:49.779 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:49.779 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:08:49.779 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:08:49.779 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.779 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:49.779 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:49.779 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.779 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:49.779 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.779 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.779 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.779 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.779 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.779 21:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.779 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.779 21:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.038 21:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.038 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.038 "name": "Existed_Raid", 00:08:50.038 "uuid": "e6447676-e263-41a0-ad99-fec9656fcc18", 00:08:50.038 "strip_size_kb": 64, 00:08:50.038 
"state": "offline", 00:08:50.038 "raid_level": "raid0", 00:08:50.038 "superblock": false, 00:08:50.038 "num_base_bdevs": 2, 00:08:50.038 "num_base_bdevs_discovered": 1, 00:08:50.038 "num_base_bdevs_operational": 1, 00:08:50.038 "base_bdevs_list": [ 00:08:50.038 { 00:08:50.038 "name": null, 00:08:50.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.038 "is_configured": false, 00:08:50.038 "data_offset": 0, 00:08:50.038 "data_size": 65536 00:08:50.038 }, 00:08:50.038 { 00:08:50.038 "name": "BaseBdev2", 00:08:50.038 "uuid": "d29eecbc-46f6-41bd-9f7d-76f18dd8d651", 00:08:50.038 "is_configured": true, 00:08:50.038 "data_offset": 0, 00:08:50.039 "data_size": 65536 00:08:50.039 } 00:08:50.039 ] 00:08:50.039 }' 00:08:50.039 21:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.039 21:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.298 21:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:50.298 21:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:50.298 21:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.298 21:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:50.298 21:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.298 21:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.298 21:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.298 21:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:50.298 21:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:50.298 21:35:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:50.298 21:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.298 21:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.298 [2024-12-10 21:35:51.059791] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:50.298 [2024-12-10 21:35:51.059936] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:50.558 21:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.558 21:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:50.558 21:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:50.558 21:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.558 21:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.558 21:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.558 21:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:50.558 21:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.558 21:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:50.558 21:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:50.558 21:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:50.558 21:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60778 00:08:50.558 21:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60778 ']' 00:08:50.558 21:35:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 60778 00:08:50.558 21:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:50.558 21:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:50.558 21:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60778 00:08:50.558 21:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:50.558 killing process with pid 60778 00:08:50.558 21:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:50.558 21:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60778' 00:08:50.558 21:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60778 00:08:50.558 [2024-12-10 21:35:51.252331] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:50.558 21:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60778 00:08:50.558 [2024-12-10 21:35:51.271023] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:51.937 ************************************ 00:08:51.937 END TEST raid_state_function_test 00:08:51.937 ************************************ 00:08:51.937 21:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:51.937 00:08:51.937 real 0m5.325s 00:08:51.937 user 0m7.650s 00:08:51.937 sys 0m0.840s 00:08:51.937 21:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.937 21:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.937 21:35:52 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:08:51.937 21:35:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:08:51.937 21:35:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.937 21:35:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:51.937 ************************************ 00:08:51.937 START TEST raid_state_function_test_sb 00:08:51.937 ************************************ 00:08:51.937 21:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:08:51.937 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:51.937 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:51.937 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:51.937 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:51.937 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:51.937 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:51.937 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:51.937 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:51.937 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:51.937 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:51.937 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:51.937 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:51.937 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:51.937 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:08:51.937 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:51.937 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:51.937 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:51.937 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:51.937 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:51.937 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:51.937 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:51.937 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:51.937 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:51.937 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61037 00:08:51.938 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:51.938 Process raid pid: 61037 00:08:51.938 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61037' 00:08:51.938 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61037 00:08:51.938 21:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61037 ']' 00:08:51.938 21:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.938 21:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:51.938 21:35:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.938 21:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:51.938 21:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.938 [2024-12-10 21:35:52.699129] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:08:51.938 [2024-12-10 21:35:52.699349] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.198 [2024-12-10 21:35:52.874756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.458 [2024-12-10 21:35:52.997277] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.458 [2024-12-10 21:35:53.227230] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.458 [2024-12-10 21:35:53.227397] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:53.027 21:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.027 21:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:53.027 21:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:53.027 21:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.027 21:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.027 [2024-12-10 21:35:53.600806] bdev.c:8697:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:08:53.027 [2024-12-10 21:35:53.600866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:53.027 [2024-12-10 21:35:53.600880] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:53.027 [2024-12-10 21:35:53.600893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:53.027 21:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.027 21:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:53.027 21:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.027 21:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.027 21:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.027 21:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.027 21:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:53.027 21:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.027 21:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.027 21:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.027 21:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.027 21:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.027 21:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:08:53.027 21:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.027 21:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.027 21:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.027 21:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.027 "name": "Existed_Raid", 00:08:53.027 "uuid": "c7c4751a-50f0-4149-93ab-f0db811dfc9a", 00:08:53.027 "strip_size_kb": 64, 00:08:53.027 "state": "configuring", 00:08:53.027 "raid_level": "raid0", 00:08:53.027 "superblock": true, 00:08:53.027 "num_base_bdevs": 2, 00:08:53.027 "num_base_bdevs_discovered": 0, 00:08:53.027 "num_base_bdevs_operational": 2, 00:08:53.027 "base_bdevs_list": [ 00:08:53.027 { 00:08:53.027 "name": "BaseBdev1", 00:08:53.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.027 "is_configured": false, 00:08:53.027 "data_offset": 0, 00:08:53.027 "data_size": 0 00:08:53.027 }, 00:08:53.027 { 00:08:53.027 "name": "BaseBdev2", 00:08:53.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.027 "is_configured": false, 00:08:53.027 "data_offset": 0, 00:08:53.027 "data_size": 0 00:08:53.027 } 00:08:53.027 ] 00:08:53.027 }' 00:08:53.027 21:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.027 21:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.595 [2024-12-10 21:35:54.075955] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:53.595 
[2024-12-10 21:35:54.076069] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.595 [2024-12-10 21:35:54.087945] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:53.595 [2024-12-10 21:35:54.088058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:53.595 [2024-12-10 21:35:54.088109] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:53.595 [2024-12-10 21:35:54.088163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.595 [2024-12-10 21:35:54.142464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:53.595 BaseBdev1 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # 
waitforbdev BaseBdev1 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.595 [ 00:08:53.595 { 00:08:53.595 "name": "BaseBdev1", 00:08:53.595 "aliases": [ 00:08:53.595 "0cb5f438-e595-4e7e-8e37-e2ab45563e43" 00:08:53.595 ], 00:08:53.595 "product_name": "Malloc disk", 00:08:53.595 "block_size": 512, 00:08:53.595 "num_blocks": 65536, 00:08:53.595 "uuid": "0cb5f438-e595-4e7e-8e37-e2ab45563e43", 00:08:53.595 "assigned_rate_limits": { 00:08:53.595 "rw_ios_per_sec": 0, 00:08:53.595 "rw_mbytes_per_sec": 0, 00:08:53.595 "r_mbytes_per_sec": 0, 00:08:53.595 "w_mbytes_per_sec": 0 00:08:53.595 }, 00:08:53.595 "claimed": true, 00:08:53.595 "claim_type": 
"exclusive_write", 00:08:53.595 "zoned": false, 00:08:53.595 "supported_io_types": { 00:08:53.595 "read": true, 00:08:53.595 "write": true, 00:08:53.595 "unmap": true, 00:08:53.595 "flush": true, 00:08:53.595 "reset": true, 00:08:53.595 "nvme_admin": false, 00:08:53.595 "nvme_io": false, 00:08:53.595 "nvme_io_md": false, 00:08:53.595 "write_zeroes": true, 00:08:53.595 "zcopy": true, 00:08:53.595 "get_zone_info": false, 00:08:53.595 "zone_management": false, 00:08:53.595 "zone_append": false, 00:08:53.595 "compare": false, 00:08:53.595 "compare_and_write": false, 00:08:53.595 "abort": true, 00:08:53.595 "seek_hole": false, 00:08:53.595 "seek_data": false, 00:08:53.595 "copy": true, 00:08:53.595 "nvme_iov_md": false 00:08:53.595 }, 00:08:53.595 "memory_domains": [ 00:08:53.595 { 00:08:53.595 "dma_device_id": "system", 00:08:53.595 "dma_device_type": 1 00:08:53.595 }, 00:08:53.595 { 00:08:53.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.595 "dma_device_type": 2 00:08:53.595 } 00:08:53.595 ], 00:08:53.595 "driver_specific": {} 00:08:53.595 } 00:08:53.595 ] 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.595 "name": "Existed_Raid", 00:08:53.595 "uuid": "54f7e382-a75a-4d1e-8773-ba8758277529", 00:08:53.595 "strip_size_kb": 64, 00:08:53.595 "state": "configuring", 00:08:53.595 "raid_level": "raid0", 00:08:53.595 "superblock": true, 00:08:53.595 "num_base_bdevs": 2, 00:08:53.595 "num_base_bdevs_discovered": 1, 00:08:53.595 "num_base_bdevs_operational": 2, 00:08:53.595 "base_bdevs_list": [ 00:08:53.595 { 00:08:53.595 "name": "BaseBdev1", 00:08:53.595 "uuid": "0cb5f438-e595-4e7e-8e37-e2ab45563e43", 00:08:53.595 "is_configured": true, 00:08:53.595 "data_offset": 2048, 00:08:53.595 "data_size": 63488 00:08:53.595 }, 00:08:53.595 { 00:08:53.595 "name": "BaseBdev2", 00:08:53.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.595 "is_configured": false, 00:08:53.595 "data_offset": 0, 00:08:53.595 
"data_size": 0 00:08:53.595 } 00:08:53.595 ] 00:08:53.595 }' 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.595 21:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.164 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:54.164 21:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.164 21:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.164 [2024-12-10 21:35:54.677613] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:54.164 [2024-12-10 21:35:54.677730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:54.164 21:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.164 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:54.164 21:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.164 21:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.164 [2024-12-10 21:35:54.689654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:54.164 [2024-12-10 21:35:54.691750] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:54.164 [2024-12-10 21:35:54.691855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:54.164 21:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.164 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 
00:08:54.164 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:54.164 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:54.164 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.164 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.164 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:54.164 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.164 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:54.164 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.164 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.164 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.164 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.164 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.164 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.164 21:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.164 21:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.164 21:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.164 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:08:54.164 "name": "Existed_Raid", 00:08:54.164 "uuid": "5b50d52b-dcc4-40bc-be2a-969c2ca7ab2a", 00:08:54.164 "strip_size_kb": 64, 00:08:54.164 "state": "configuring", 00:08:54.164 "raid_level": "raid0", 00:08:54.164 "superblock": true, 00:08:54.164 "num_base_bdevs": 2, 00:08:54.164 "num_base_bdevs_discovered": 1, 00:08:54.164 "num_base_bdevs_operational": 2, 00:08:54.164 "base_bdevs_list": [ 00:08:54.164 { 00:08:54.164 "name": "BaseBdev1", 00:08:54.164 "uuid": "0cb5f438-e595-4e7e-8e37-e2ab45563e43", 00:08:54.164 "is_configured": true, 00:08:54.164 "data_offset": 2048, 00:08:54.164 "data_size": 63488 00:08:54.164 }, 00:08:54.164 { 00:08:54.164 "name": "BaseBdev2", 00:08:54.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.164 "is_configured": false, 00:08:54.164 "data_offset": 0, 00:08:54.164 "data_size": 0 00:08:54.164 } 00:08:54.164 ] 00:08:54.164 }' 00:08:54.164 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.164 21:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.424 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:54.424 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.424 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.424 [2024-12-10 21:35:55.154744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:54.424 [2024-12-10 21:35:55.155041] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:54.424 [2024-12-10 21:35:55.155057] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:54.424 [2024-12-10 21:35:55.155310] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:54.424 [2024-12-10 21:35:55.155497] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:54.424 [2024-12-10 21:35:55.155513] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:54.424 [2024-12-10 21:35:55.155704] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:54.424 BaseBdev2 00:08:54.424 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.424 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:54.424 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:54.424 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:54.424 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:54.424 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:54.424 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:54.424 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:54.424 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.424 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.424 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.424 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:54.424 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.424 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:54.424 [ 00:08:54.424 { 00:08:54.424 "name": "BaseBdev2", 00:08:54.424 "aliases": [ 00:08:54.424 "3450bad8-8980-40ed-a5a2-081fa793260e" 00:08:54.424 ], 00:08:54.424 "product_name": "Malloc disk", 00:08:54.424 "block_size": 512, 00:08:54.424 "num_blocks": 65536, 00:08:54.424 "uuid": "3450bad8-8980-40ed-a5a2-081fa793260e", 00:08:54.424 "assigned_rate_limits": { 00:08:54.424 "rw_ios_per_sec": 0, 00:08:54.424 "rw_mbytes_per_sec": 0, 00:08:54.424 "r_mbytes_per_sec": 0, 00:08:54.424 "w_mbytes_per_sec": 0 00:08:54.424 }, 00:08:54.424 "claimed": true, 00:08:54.424 "claim_type": "exclusive_write", 00:08:54.424 "zoned": false, 00:08:54.424 "supported_io_types": { 00:08:54.424 "read": true, 00:08:54.424 "write": true, 00:08:54.424 "unmap": true, 00:08:54.424 "flush": true, 00:08:54.424 "reset": true, 00:08:54.424 "nvme_admin": false, 00:08:54.424 "nvme_io": false, 00:08:54.424 "nvme_io_md": false, 00:08:54.424 "write_zeroes": true, 00:08:54.424 "zcopy": true, 00:08:54.424 "get_zone_info": false, 00:08:54.424 "zone_management": false, 00:08:54.424 "zone_append": false, 00:08:54.424 "compare": false, 00:08:54.424 "compare_and_write": false, 00:08:54.424 "abort": true, 00:08:54.424 "seek_hole": false, 00:08:54.424 "seek_data": false, 00:08:54.424 "copy": true, 00:08:54.424 "nvme_iov_md": false 00:08:54.424 }, 00:08:54.424 "memory_domains": [ 00:08:54.424 { 00:08:54.424 "dma_device_id": "system", 00:08:54.424 "dma_device_type": 1 00:08:54.424 }, 00:08:54.424 { 00:08:54.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.424 "dma_device_type": 2 00:08:54.424 } 00:08:54.424 ], 00:08:54.424 "driver_specific": {} 00:08:54.424 } 00:08:54.424 ] 00:08:54.424 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.424 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:54.424 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:54.424 
21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:54.424 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:54.424 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.424 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:54.424 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:54.424 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.424 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:54.424 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.424 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.424 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.424 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.425 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.425 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.425 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.425 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.684 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.684 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.684 "name": 
"Existed_Raid", 00:08:54.684 "uuid": "5b50d52b-dcc4-40bc-be2a-969c2ca7ab2a", 00:08:54.684 "strip_size_kb": 64, 00:08:54.684 "state": "online", 00:08:54.684 "raid_level": "raid0", 00:08:54.684 "superblock": true, 00:08:54.684 "num_base_bdevs": 2, 00:08:54.684 "num_base_bdevs_discovered": 2, 00:08:54.684 "num_base_bdevs_operational": 2, 00:08:54.684 "base_bdevs_list": [ 00:08:54.684 { 00:08:54.684 "name": "BaseBdev1", 00:08:54.684 "uuid": "0cb5f438-e595-4e7e-8e37-e2ab45563e43", 00:08:54.684 "is_configured": true, 00:08:54.684 "data_offset": 2048, 00:08:54.684 "data_size": 63488 00:08:54.684 }, 00:08:54.684 { 00:08:54.684 "name": "BaseBdev2", 00:08:54.684 "uuid": "3450bad8-8980-40ed-a5a2-081fa793260e", 00:08:54.684 "is_configured": true, 00:08:54.684 "data_offset": 2048, 00:08:54.684 "data_size": 63488 00:08:54.684 } 00:08:54.684 ] 00:08:54.684 }' 00:08:54.684 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.684 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.944 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:54.944 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:54.944 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:54.944 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:54.944 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:54.944 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:54.944 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:54.944 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 
00:08:54.944 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.944 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.944 [2024-12-10 21:35:55.666292] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:54.944 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.944 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:54.944 "name": "Existed_Raid", 00:08:54.944 "aliases": [ 00:08:54.944 "5b50d52b-dcc4-40bc-be2a-969c2ca7ab2a" 00:08:54.944 ], 00:08:54.944 "product_name": "Raid Volume", 00:08:54.944 "block_size": 512, 00:08:54.944 "num_blocks": 126976, 00:08:54.944 "uuid": "5b50d52b-dcc4-40bc-be2a-969c2ca7ab2a", 00:08:54.944 "assigned_rate_limits": { 00:08:54.944 "rw_ios_per_sec": 0, 00:08:54.944 "rw_mbytes_per_sec": 0, 00:08:54.944 "r_mbytes_per_sec": 0, 00:08:54.944 "w_mbytes_per_sec": 0 00:08:54.944 }, 00:08:54.944 "claimed": false, 00:08:54.944 "zoned": false, 00:08:54.944 "supported_io_types": { 00:08:54.944 "read": true, 00:08:54.944 "write": true, 00:08:54.944 "unmap": true, 00:08:54.944 "flush": true, 00:08:54.944 "reset": true, 00:08:54.944 "nvme_admin": false, 00:08:54.944 "nvme_io": false, 00:08:54.944 "nvme_io_md": false, 00:08:54.944 "write_zeroes": true, 00:08:54.944 "zcopy": false, 00:08:54.944 "get_zone_info": false, 00:08:54.944 "zone_management": false, 00:08:54.944 "zone_append": false, 00:08:54.944 "compare": false, 00:08:54.944 "compare_and_write": false, 00:08:54.944 "abort": false, 00:08:54.944 "seek_hole": false, 00:08:54.944 "seek_data": false, 00:08:54.944 "copy": false, 00:08:54.944 "nvme_iov_md": false 00:08:54.944 }, 00:08:54.944 "memory_domains": [ 00:08:54.944 { 00:08:54.944 "dma_device_id": "system", 00:08:54.944 "dma_device_type": 1 00:08:54.944 }, 00:08:54.944 { 00:08:54.944 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:08:54.944 "dma_device_type": 2 00:08:54.944 }, 00:08:54.944 { 00:08:54.944 "dma_device_id": "system", 00:08:54.944 "dma_device_type": 1 00:08:54.944 }, 00:08:54.944 { 00:08:54.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.944 "dma_device_type": 2 00:08:54.944 } 00:08:54.944 ], 00:08:54.944 "driver_specific": { 00:08:54.944 "raid": { 00:08:54.944 "uuid": "5b50d52b-dcc4-40bc-be2a-969c2ca7ab2a", 00:08:54.944 "strip_size_kb": 64, 00:08:54.944 "state": "online", 00:08:54.944 "raid_level": "raid0", 00:08:54.944 "superblock": true, 00:08:54.944 "num_base_bdevs": 2, 00:08:54.944 "num_base_bdevs_discovered": 2, 00:08:54.944 "num_base_bdevs_operational": 2, 00:08:54.944 "base_bdevs_list": [ 00:08:54.944 { 00:08:54.944 "name": "BaseBdev1", 00:08:54.944 "uuid": "0cb5f438-e595-4e7e-8e37-e2ab45563e43", 00:08:54.944 "is_configured": true, 00:08:54.944 "data_offset": 2048, 00:08:54.944 "data_size": 63488 00:08:54.944 }, 00:08:54.944 { 00:08:54.944 "name": "BaseBdev2", 00:08:54.944 "uuid": "3450bad8-8980-40ed-a5a2-081fa793260e", 00:08:54.944 "is_configured": true, 00:08:54.944 "data_offset": 2048, 00:08:54.944 "data_size": 63488 00:08:54.944 } 00:08:54.944 ] 00:08:54.944 } 00:08:54.944 } 00:08:54.944 }' 00:08:54.944 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:55.237 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:55.237 BaseBdev2' 00:08:55.237 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.237 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:55.237 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.237 21:35:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:55.237 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.237 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.237 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.237 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.237 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.237 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.237 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.237 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:55.237 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.237 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.237 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.237 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.237 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.237 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.237 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:55.237 21:35:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.237 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.237 [2024-12-10 21:35:55.909576] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:55.237 [2024-12-10 21:35:55.909618] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:55.237 [2024-12-10 21:35:55.909678] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:55.496 21:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.496 21:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:55.496 21:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:55.496 21:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:55.496 21:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:55.497 21:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:55.497 21:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:08:55.497 21:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.497 21:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:55.497 21:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.497 21:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.497 21:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:55.497 21:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:55.497 21:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.497 21:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.497 21:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.497 21:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.497 21:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.497 21:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.497 21:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.497 21:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.497 21:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.497 "name": "Existed_Raid", 00:08:55.497 "uuid": "5b50d52b-dcc4-40bc-be2a-969c2ca7ab2a", 00:08:55.497 "strip_size_kb": 64, 00:08:55.497 "state": "offline", 00:08:55.497 "raid_level": "raid0", 00:08:55.497 "superblock": true, 00:08:55.497 "num_base_bdevs": 2, 00:08:55.497 "num_base_bdevs_discovered": 1, 00:08:55.497 "num_base_bdevs_operational": 1, 00:08:55.497 "base_bdevs_list": [ 00:08:55.497 { 00:08:55.497 "name": null, 00:08:55.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.497 "is_configured": false, 00:08:55.497 "data_offset": 0, 00:08:55.497 "data_size": 63488 00:08:55.497 }, 00:08:55.497 { 00:08:55.497 "name": "BaseBdev2", 00:08:55.497 "uuid": "3450bad8-8980-40ed-a5a2-081fa793260e", 00:08:55.497 "is_configured": true, 00:08:55.497 "data_offset": 2048, 00:08:55.497 "data_size": 63488 00:08:55.497 } 00:08:55.497 ] 00:08:55.497 }' 00:08:55.497 21:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:08:55.497 21:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.756 21:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:55.756 21:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:55.756 21:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:55.756 21:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.756 21:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.756 21:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.756 21:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.756 21:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:55.756 21:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:55.756 21:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:55.756 21:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.756 21:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.756 [2024-12-10 21:35:56.504376] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:55.756 [2024-12-10 21:35:56.504552] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:56.014 21:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.014 21:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:56.014 21:35:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:56.014 21:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.014 21:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.014 21:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:56.014 21:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.014 21:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.014 21:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:56.014 21:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:56.014 21:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:56.014 21:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61037 00:08:56.014 21:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61037 ']' 00:08:56.014 21:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61037 00:08:56.014 21:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:56.015 21:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:56.015 21:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61037 00:08:56.015 killing process with pid 61037 00:08:56.015 21:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:56.015 21:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:56.015 21:35:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61037' 00:08:56.015 21:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61037 00:08:56.015 [2024-12-10 21:35:56.718118] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:56.015 21:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61037 00:08:56.015 [2024-12-10 21:35:56.736316] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:57.394 21:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:57.394 00:08:57.394 real 0m5.299s 00:08:57.394 user 0m7.699s 00:08:57.394 sys 0m0.823s 00:08:57.394 21:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.394 ************************************ 00:08:57.394 END TEST raid_state_function_test_sb 00:08:57.394 ************************************ 00:08:57.394 21:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.394 21:35:57 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:08:57.394 21:35:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:57.394 21:35:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.394 21:35:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:57.394 ************************************ 00:08:57.394 START TEST raid_superblock_test 00:08:57.394 ************************************ 00:08:57.394 21:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:08:57.394 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:57.394 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:57.394 21:35:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:57.394 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:57.394 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:57.394 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:57.394 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:57.394 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:57.394 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:57.394 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:57.394 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:57.394 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:57.394 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:57.394 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:57.394 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:57.394 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:57.394 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61291 00:08:57.394 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:57.394 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61291 00:08:57.394 21:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61291 ']' 00:08:57.394 21:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.394 21:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:57.394 21:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.394 21:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:57.394 21:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.394 [2024-12-10 21:35:58.057932] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:08:57.394 [2024-12-10 21:35:58.058137] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61291 ] 00:08:57.654 [2024-12-10 21:35:58.233229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.654 [2024-12-10 21:35:58.355905] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.913 [2024-12-10 21:35:58.570056] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.913 [2024-12-10 21:35:58.570092] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:58.173 21:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:58.173 21:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:58.173 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:58.173 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:58.173 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- 
# local bdev_malloc=malloc1 00:08:58.173 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:58.173 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:58.173 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:58.173 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:58.173 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:58.173 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:58.173 21:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.173 21:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.173 malloc1 00:08:58.173 21:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.173 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:58.173 21:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.173 21:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.173 [2024-12-10 21:35:58.944033] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:58.173 [2024-12-10 21:35:58.944168] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:58.173 [2024-12-10 21:35:58.944210] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:58.173 [2024-12-10 21:35:58.944268] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.173 [2024-12-10 21:35:58.946501] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.173 [2024-12-10 21:35:58.946579] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:58.173 pt1 00:08:58.173 21:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.173 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:58.173 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:58.173 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:58.173 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:58.173 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:58.173 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:58.174 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:58.174 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:58.174 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:58.174 21:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.174 21:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.433 malloc2 00:08:58.433 21:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.433 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:58.433 21:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.433 21:35:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:58.433 [2024-12-10 21:35:59.005522] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:58.433 [2024-12-10 21:35:59.005647] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:58.433 [2024-12-10 21:35:59.005676] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:58.433 [2024-12-10 21:35:59.005684] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.433 [2024-12-10 21:35:59.007911] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.433 [2024-12-10 21:35:59.007964] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:58.433 pt2 00:08:58.433 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.433 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:58.433 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:58.433 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:58.433 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.433 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.433 [2024-12-10 21:35:59.017575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:58.433 [2024-12-10 21:35:59.019391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:58.433 [2024-12-10 21:35:59.019599] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:58.433 [2024-12-10 21:35:59.019614] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:58.433 [2024-12-10 21:35:59.019899] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:58.433 [2024-12-10 21:35:59.020064] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:58.433 [2024-12-10 21:35:59.020076] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:58.433 [2024-12-10 21:35:59.020260] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:58.433 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.433 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:58.433 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:58.433 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:58.433 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.433 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.433 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:58.433 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.433 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.433 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.433 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.433 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.433 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:58.433 21:35:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.433 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.433 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.433 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.433 "name": "raid_bdev1", 00:08:58.433 "uuid": "490d7276-c9e9-488f-8a44-a8c43e5ae680", 00:08:58.433 "strip_size_kb": 64, 00:08:58.433 "state": "online", 00:08:58.433 "raid_level": "raid0", 00:08:58.433 "superblock": true, 00:08:58.433 "num_base_bdevs": 2, 00:08:58.433 "num_base_bdevs_discovered": 2, 00:08:58.433 "num_base_bdevs_operational": 2, 00:08:58.433 "base_bdevs_list": [ 00:08:58.433 { 00:08:58.433 "name": "pt1", 00:08:58.433 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:58.433 "is_configured": true, 00:08:58.433 "data_offset": 2048, 00:08:58.433 "data_size": 63488 00:08:58.433 }, 00:08:58.433 { 00:08:58.433 "name": "pt2", 00:08:58.433 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:58.433 "is_configured": true, 00:08:58.433 "data_offset": 2048, 00:08:58.433 "data_size": 63488 00:08:58.433 } 00:08:58.433 ] 00:08:58.433 }' 00:08:58.433 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.433 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.692 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:58.692 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:58.692 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:58.692 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:58.692 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:58.692 21:35:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:58.692 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:58.692 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.692 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.692 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:58.692 [2024-12-10 21:35:59.465031] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:58.952 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.952 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:58.952 "name": "raid_bdev1", 00:08:58.952 "aliases": [ 00:08:58.952 "490d7276-c9e9-488f-8a44-a8c43e5ae680" 00:08:58.952 ], 00:08:58.952 "product_name": "Raid Volume", 00:08:58.952 "block_size": 512, 00:08:58.952 "num_blocks": 126976, 00:08:58.952 "uuid": "490d7276-c9e9-488f-8a44-a8c43e5ae680", 00:08:58.952 "assigned_rate_limits": { 00:08:58.952 "rw_ios_per_sec": 0, 00:08:58.952 "rw_mbytes_per_sec": 0, 00:08:58.952 "r_mbytes_per_sec": 0, 00:08:58.952 "w_mbytes_per_sec": 0 00:08:58.952 }, 00:08:58.952 "claimed": false, 00:08:58.952 "zoned": false, 00:08:58.952 "supported_io_types": { 00:08:58.952 "read": true, 00:08:58.952 "write": true, 00:08:58.952 "unmap": true, 00:08:58.952 "flush": true, 00:08:58.952 "reset": true, 00:08:58.952 "nvme_admin": false, 00:08:58.952 "nvme_io": false, 00:08:58.952 "nvme_io_md": false, 00:08:58.952 "write_zeroes": true, 00:08:58.952 "zcopy": false, 00:08:58.952 "get_zone_info": false, 00:08:58.952 "zone_management": false, 00:08:58.952 "zone_append": false, 00:08:58.952 "compare": false, 00:08:58.952 "compare_and_write": false, 00:08:58.952 "abort": false, 00:08:58.952 "seek_hole": false, 00:08:58.952 
"seek_data": false, 00:08:58.952 "copy": false, 00:08:58.952 "nvme_iov_md": false 00:08:58.952 }, 00:08:58.952 "memory_domains": [ 00:08:58.952 { 00:08:58.952 "dma_device_id": "system", 00:08:58.952 "dma_device_type": 1 00:08:58.953 }, 00:08:58.953 { 00:08:58.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.953 "dma_device_type": 2 00:08:58.953 }, 00:08:58.953 { 00:08:58.953 "dma_device_id": "system", 00:08:58.953 "dma_device_type": 1 00:08:58.953 }, 00:08:58.953 { 00:08:58.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.953 "dma_device_type": 2 00:08:58.953 } 00:08:58.953 ], 00:08:58.953 "driver_specific": { 00:08:58.953 "raid": { 00:08:58.953 "uuid": "490d7276-c9e9-488f-8a44-a8c43e5ae680", 00:08:58.953 "strip_size_kb": 64, 00:08:58.953 "state": "online", 00:08:58.953 "raid_level": "raid0", 00:08:58.953 "superblock": true, 00:08:58.953 "num_base_bdevs": 2, 00:08:58.953 "num_base_bdevs_discovered": 2, 00:08:58.953 "num_base_bdevs_operational": 2, 00:08:58.953 "base_bdevs_list": [ 00:08:58.953 { 00:08:58.953 "name": "pt1", 00:08:58.953 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:58.953 "is_configured": true, 00:08:58.953 "data_offset": 2048, 00:08:58.953 "data_size": 63488 00:08:58.953 }, 00:08:58.953 { 00:08:58.953 "name": "pt2", 00:08:58.953 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:58.953 "is_configured": true, 00:08:58.953 "data_offset": 2048, 00:08:58.953 "data_size": 63488 00:08:58.953 } 00:08:58.953 ] 00:08:58.953 } 00:08:58.953 } 00:08:58.953 }' 00:08:58.953 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:58.953 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:58.953 pt2' 00:08:58.953 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.953 21:35:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:58.953 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.953 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.953 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:58.953 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.953 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.953 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.953 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.953 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.953 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.953 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.953 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:58.953 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.953 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.953 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.953 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.953 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.953 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:08:58.953 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.953 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.953 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:58.953 [2024-12-10 21:35:59.656730] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:58.953 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.953 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=490d7276-c9e9-488f-8a44-a8c43e5ae680 00:08:58.953 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 490d7276-c9e9-488f-8a44-a8c43e5ae680 ']' 00:08:58.953 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:58.953 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.953 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.953 [2024-12-10 21:35:59.704333] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:58.953 [2024-12-10 21:35:59.704365] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:58.953 [2024-12-10 21:35:59.704474] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:58.953 [2024-12-10 21:35:59.704536] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:58.953 [2024-12-10 21:35:59.704550] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:58.953 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.953 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r 
'.[]' 00:08:58.953 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.953 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.953 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.953 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.213 [2024-12-10 21:35:59.848103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:59.213 [2024-12-10 21:35:59.849963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:59.213 [2024-12-10 21:35:59.850026] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:59.213 [2024-12-10 21:35:59.850077] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:59.213 [2024-12-10 21:35:59.850091] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:59.213 [2024-12-10 21:35:59.850103] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:59.213 request: 00:08:59.213 { 00:08:59.213 "name": "raid_bdev1", 00:08:59.213 "raid_level": "raid0", 00:08:59.213 "base_bdevs": [ 00:08:59.213 "malloc1", 00:08:59.213 "malloc2" 00:08:59.213 ], 00:08:59.213 "strip_size_kb": 64, 00:08:59.213 "superblock": false, 00:08:59.213 "method": "bdev_raid_create", 00:08:59.213 "req_id": 1 00:08:59.213 } 00:08:59.213 Got JSON-RPC error response 00:08:59.213 response: 00:08:59.213 { 00:08:59.213 "code": -17, 00:08:59.213 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:59.213 } 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:59.213 
21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.213 [2024-12-10 21:35:59.912001] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:59.213 [2024-12-10 21:35:59.912140] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.213 [2024-12-10 21:35:59.912184] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:59.213 [2024-12-10 21:35:59.912216] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.213 [2024-12-10 21:35:59.914442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.213 [2024-12-10 21:35:59.914516] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:59.213 [2024-12-10 21:35:59.914634] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:59.213 [2024-12-10 21:35:59.914715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:59.213 pt1 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.213 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:59.214 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.214 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.214 "name": "raid_bdev1", 00:08:59.214 "uuid": "490d7276-c9e9-488f-8a44-a8c43e5ae680", 00:08:59.214 "strip_size_kb": 64, 00:08:59.214 "state": "configuring", 00:08:59.214 "raid_level": "raid0", 00:08:59.214 "superblock": true, 00:08:59.214 "num_base_bdevs": 2, 00:08:59.214 "num_base_bdevs_discovered": 1, 00:08:59.214 "num_base_bdevs_operational": 2, 00:08:59.214 "base_bdevs_list": [ 00:08:59.214 { 00:08:59.214 "name": "pt1", 00:08:59.214 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:08:59.214 "is_configured": true, 00:08:59.214 "data_offset": 2048, 00:08:59.214 "data_size": 63488 00:08:59.214 }, 00:08:59.214 { 00:08:59.214 "name": null, 00:08:59.214 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:59.214 "is_configured": false, 00:08:59.214 "data_offset": 2048, 00:08:59.214 "data_size": 63488 00:08:59.214 } 00:08:59.214 ] 00:08:59.214 }' 00:08:59.214 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.214 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.792 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:59.792 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:59.792 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:59.792 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:59.792 21:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.792 21:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.792 [2024-12-10 21:36:00.391231] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:59.792 [2024-12-10 21:36:00.391312] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.793 [2024-12-10 21:36:00.391336] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:59.793 [2024-12-10 21:36:00.391347] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.793 [2024-12-10 21:36:00.391830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.793 [2024-12-10 21:36:00.391855] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:08:59.793 [2024-12-10 21:36:00.391944] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:59.793 [2024-12-10 21:36:00.391970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:59.793 [2024-12-10 21:36:00.392099] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:59.793 [2024-12-10 21:36:00.392111] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:59.793 [2024-12-10 21:36:00.392390] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:59.793 [2024-12-10 21:36:00.392585] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:59.793 [2024-12-10 21:36:00.392596] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:59.793 [2024-12-10 21:36:00.392753] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.793 pt2 00:08:59.793 21:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.793 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:59.793 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:59.793 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:59.793 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:59.793 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:59.793 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.793 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.793 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:08:59.793 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.793 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.793 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.793 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.793 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:59.793 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.793 21:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.793 21:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.793 21:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.793 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.793 "name": "raid_bdev1", 00:08:59.793 "uuid": "490d7276-c9e9-488f-8a44-a8c43e5ae680", 00:08:59.793 "strip_size_kb": 64, 00:08:59.793 "state": "online", 00:08:59.793 "raid_level": "raid0", 00:08:59.793 "superblock": true, 00:08:59.793 "num_base_bdevs": 2, 00:08:59.793 "num_base_bdevs_discovered": 2, 00:08:59.793 "num_base_bdevs_operational": 2, 00:08:59.793 "base_bdevs_list": [ 00:08:59.793 { 00:08:59.793 "name": "pt1", 00:08:59.793 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:59.793 "is_configured": true, 00:08:59.793 "data_offset": 2048, 00:08:59.793 "data_size": 63488 00:08:59.793 }, 00:08:59.793 { 00:08:59.793 "name": "pt2", 00:08:59.793 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:59.793 "is_configured": true, 00:08:59.793 "data_offset": 2048, 00:08:59.793 "data_size": 63488 00:08:59.793 } 00:08:59.793 ] 00:08:59.793 }' 00:08:59.793 21:36:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.793 21:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.052 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:00.052 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:00.052 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:00.052 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:00.052 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:00.052 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:00.052 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:00.052 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:00.052 21:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.052 21:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.052 [2024-12-10 21:36:00.790818] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:00.052 21:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.052 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:00.052 "name": "raid_bdev1", 00:09:00.052 "aliases": [ 00:09:00.052 "490d7276-c9e9-488f-8a44-a8c43e5ae680" 00:09:00.052 ], 00:09:00.052 "product_name": "Raid Volume", 00:09:00.052 "block_size": 512, 00:09:00.052 "num_blocks": 126976, 00:09:00.052 "uuid": "490d7276-c9e9-488f-8a44-a8c43e5ae680", 00:09:00.052 "assigned_rate_limits": { 00:09:00.052 "rw_ios_per_sec": 0, 00:09:00.052 "rw_mbytes_per_sec": 0, 00:09:00.052 
"r_mbytes_per_sec": 0, 00:09:00.052 "w_mbytes_per_sec": 0 00:09:00.052 }, 00:09:00.052 "claimed": false, 00:09:00.052 "zoned": false, 00:09:00.052 "supported_io_types": { 00:09:00.052 "read": true, 00:09:00.052 "write": true, 00:09:00.052 "unmap": true, 00:09:00.052 "flush": true, 00:09:00.052 "reset": true, 00:09:00.052 "nvme_admin": false, 00:09:00.052 "nvme_io": false, 00:09:00.052 "nvme_io_md": false, 00:09:00.052 "write_zeroes": true, 00:09:00.052 "zcopy": false, 00:09:00.052 "get_zone_info": false, 00:09:00.052 "zone_management": false, 00:09:00.052 "zone_append": false, 00:09:00.052 "compare": false, 00:09:00.052 "compare_and_write": false, 00:09:00.052 "abort": false, 00:09:00.052 "seek_hole": false, 00:09:00.052 "seek_data": false, 00:09:00.052 "copy": false, 00:09:00.052 "nvme_iov_md": false 00:09:00.052 }, 00:09:00.052 "memory_domains": [ 00:09:00.052 { 00:09:00.052 "dma_device_id": "system", 00:09:00.052 "dma_device_type": 1 00:09:00.052 }, 00:09:00.052 { 00:09:00.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.052 "dma_device_type": 2 00:09:00.052 }, 00:09:00.052 { 00:09:00.052 "dma_device_id": "system", 00:09:00.052 "dma_device_type": 1 00:09:00.052 }, 00:09:00.052 { 00:09:00.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.052 "dma_device_type": 2 00:09:00.052 } 00:09:00.052 ], 00:09:00.052 "driver_specific": { 00:09:00.052 "raid": { 00:09:00.052 "uuid": "490d7276-c9e9-488f-8a44-a8c43e5ae680", 00:09:00.052 "strip_size_kb": 64, 00:09:00.052 "state": "online", 00:09:00.052 "raid_level": "raid0", 00:09:00.052 "superblock": true, 00:09:00.052 "num_base_bdevs": 2, 00:09:00.052 "num_base_bdevs_discovered": 2, 00:09:00.052 "num_base_bdevs_operational": 2, 00:09:00.052 "base_bdevs_list": [ 00:09:00.052 { 00:09:00.052 "name": "pt1", 00:09:00.052 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:00.052 "is_configured": true, 00:09:00.052 "data_offset": 2048, 00:09:00.052 "data_size": 63488 00:09:00.052 }, 00:09:00.052 { 00:09:00.052 "name": 
"pt2", 00:09:00.053 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:00.053 "is_configured": true, 00:09:00.053 "data_offset": 2048, 00:09:00.053 "data_size": 63488 00:09:00.053 } 00:09:00.053 ] 00:09:00.053 } 00:09:00.053 } 00:09:00.053 }' 00:09:00.053 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:00.312 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:00.312 pt2' 00:09:00.312 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.312 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:00.312 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.312 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:00.312 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.312 21:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.312 21:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.312 21:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.312 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.312 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.312 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.312 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:00.312 21:36:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.312 21:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.312 21:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.312 21:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.312 21:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.312 21:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.312 21:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:00.312 21:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:00.312 21:36:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.312 21:36:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.312 [2024-12-10 21:36:01.030385] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:00.312 21:36:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.312 21:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 490d7276-c9e9-488f-8a44-a8c43e5ae680 '!=' 490d7276-c9e9-488f-8a44-a8c43e5ae680 ']' 00:09:00.312 21:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:00.312 21:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:00.312 21:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:00.312 21:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61291 00:09:00.312 21:36:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61291 ']' 00:09:00.312 21:36:01 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 61291 00:09:00.312 21:36:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:00.312 21:36:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:00.312 21:36:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61291 00:09:00.572 killing process with pid 61291 00:09:00.572 21:36:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:00.572 21:36:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:00.572 21:36:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61291' 00:09:00.572 21:36:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61291 00:09:00.572 [2024-12-10 21:36:01.112604] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:00.572 [2024-12-10 21:36:01.112709] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:00.572 [2024-12-10 21:36:01.112758] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:00.572 [2024-12-10 21:36:01.112770] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:00.572 21:36:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61291 00:09:00.572 [2024-12-10 21:36:01.326337] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:01.951 21:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:01.951 00:09:01.951 real 0m4.524s 00:09:01.951 user 0m6.310s 00:09:01.951 sys 0m0.740s 00:09:01.951 ************************************ 00:09:01.951 END TEST raid_superblock_test 00:09:01.951 ************************************ 00:09:01.951 21:36:02 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.951 21:36:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.951 21:36:02 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:09:01.951 21:36:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:01.951 21:36:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.951 21:36:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:01.951 ************************************ 00:09:01.951 START TEST raid_read_error_test 00:09:01.951 ************************************ 00:09:01.951 21:36:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:09:01.951 21:36:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:01.951 21:36:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:01.951 21:36:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:01.951 21:36:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:01.951 21:36:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:01.951 21:36:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:01.951 21:36:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:01.951 21:36:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:01.951 21:36:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:01.951 21:36:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:01.951 21:36:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:01.951 21:36:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:01.951 21:36:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:01.951 21:36:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:01.951 21:36:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:01.951 21:36:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:01.951 21:36:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:01.951 21:36:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:01.951 21:36:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:01.951 21:36:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:01.951 21:36:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:01.951 21:36:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:01.951 21:36:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.lSabV6S8QZ 00:09:01.951 21:36:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61497 00:09:01.951 21:36:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:01.951 21:36:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61497 00:09:01.951 21:36:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61497 ']' 00:09:01.951 21:36:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:01.951 21:36:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:01.951 21:36:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.951 21:36:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:01.951 21:36:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.951 [2024-12-10 21:36:02.653928] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:09:01.951 [2024-12-10 21:36:02.654045] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61497 ] 00:09:02.211 [2024-12-10 21:36:02.810942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.211 [2024-12-10 21:36:02.931633] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.471 [2024-12-10 21:36:03.134188] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.471 [2024-12-10 21:36:03.134237] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.730 21:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:02.730 21:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:02.730 21:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:02.730 21:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:02.730 21:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.730 21:36:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:02.990 BaseBdev1_malloc 00:09:02.990 21:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.990 21:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:02.990 21:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.990 21:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.990 true 00:09:02.990 21:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.990 21:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:02.990 21:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.990 21:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.990 [2024-12-10 21:36:03.565707] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:02.990 [2024-12-10 21:36:03.565811] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.990 [2024-12-10 21:36:03.565850] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:02.990 [2024-12-10 21:36:03.565881] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.990 [2024-12-10 21:36:03.568002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.990 [2024-12-10 21:36:03.568085] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:02.990 BaseBdev1 00:09:02.990 21:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.990 21:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:02.990 21:36:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:02.990 21:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.990 21:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.990 BaseBdev2_malloc 00:09:02.990 21:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.990 21:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:02.990 21:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.990 21:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.990 true 00:09:02.990 21:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.990 21:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:02.990 21:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.990 21:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.990 [2024-12-10 21:36:03.633957] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:02.990 [2024-12-10 21:36:03.634011] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.990 [2024-12-10 21:36:03.634028] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:02.990 [2024-12-10 21:36:03.634039] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.990 [2024-12-10 21:36:03.636126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.990 [2024-12-10 21:36:03.636167] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 
00:09:02.990 BaseBdev2 00:09:02.990 21:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.990 21:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:02.990 21:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.990 21:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.990 [2024-12-10 21:36:03.645986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:02.990 [2024-12-10 21:36:03.647839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:02.990 [2024-12-10 21:36:03.648033] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:02.990 [2024-12-10 21:36:03.648051] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:02.990 [2024-12-10 21:36:03.648266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:02.990 [2024-12-10 21:36:03.648449] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:02.990 [2024-12-10 21:36:03.648479] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:02.990 [2024-12-10 21:36:03.648654] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:02.990 21:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.990 21:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:02.990 21:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:02.990 21:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:02.990 21:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:02.990 21:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.990 21:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:02.990 21:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.990 21:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.990 21:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.990 21:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.990 21:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.990 21:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:02.991 21:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.991 21:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.991 21:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.991 21:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.991 "name": "raid_bdev1", 00:09:02.991 "uuid": "95c1e9ce-b887-42c6-923f-4146fdf7dfce", 00:09:02.991 "strip_size_kb": 64, 00:09:02.991 "state": "online", 00:09:02.991 "raid_level": "raid0", 00:09:02.991 "superblock": true, 00:09:02.991 "num_base_bdevs": 2, 00:09:02.991 "num_base_bdevs_discovered": 2, 00:09:02.991 "num_base_bdevs_operational": 2, 00:09:02.991 "base_bdevs_list": [ 00:09:02.991 { 00:09:02.991 "name": "BaseBdev1", 00:09:02.991 "uuid": "dbfa93e9-1664-58d5-8b5b-af6b2608c419", 00:09:02.991 "is_configured": true, 00:09:02.991 "data_offset": 2048, 00:09:02.991 "data_size": 63488 
00:09:02.991 }, 00:09:02.991 { 00:09:02.991 "name": "BaseBdev2", 00:09:02.991 "uuid": "69058c00-9a27-56bb-8253-2cba0b75990b", 00:09:02.991 "is_configured": true, 00:09:02.991 "data_offset": 2048, 00:09:02.991 "data_size": 63488 00:09:02.991 } 00:09:02.991 ] 00:09:02.991 }' 00:09:02.991 21:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.991 21:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.559 21:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:03.559 21:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:03.559 [2024-12-10 21:36:04.218633] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:04.498 21:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:04.498 21:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.498 21:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.498 21:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.498 21:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:04.498 21:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:04.498 21:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:04.498 21:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:04.498 21:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:04.498 21:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:04.498 21:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.498 21:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.498 21:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:04.498 21:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.498 21:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.498 21:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.498 21:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.498 21:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.498 21:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.498 21:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:04.498 21:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.498 21:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.498 21:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.498 "name": "raid_bdev1", 00:09:04.498 "uuid": "95c1e9ce-b887-42c6-923f-4146fdf7dfce", 00:09:04.498 "strip_size_kb": 64, 00:09:04.498 "state": "online", 00:09:04.498 "raid_level": "raid0", 00:09:04.498 "superblock": true, 00:09:04.498 "num_base_bdevs": 2, 00:09:04.498 "num_base_bdevs_discovered": 2, 00:09:04.498 "num_base_bdevs_operational": 2, 00:09:04.498 "base_bdevs_list": [ 00:09:04.498 { 00:09:04.498 "name": "BaseBdev1", 00:09:04.498 "uuid": "dbfa93e9-1664-58d5-8b5b-af6b2608c419", 00:09:04.498 "is_configured": true, 00:09:04.498 "data_offset": 2048, 00:09:04.498 "data_size": 63488 
00:09:04.498 }, 00:09:04.498 { 00:09:04.498 "name": "BaseBdev2", 00:09:04.498 "uuid": "69058c00-9a27-56bb-8253-2cba0b75990b", 00:09:04.498 "is_configured": true, 00:09:04.498 "data_offset": 2048, 00:09:04.498 "data_size": 63488 00:09:04.498 } 00:09:04.498 ] 00:09:04.498 }' 00:09:04.498 21:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.498 21:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.076 21:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:05.076 21:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.076 21:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.076 [2024-12-10 21:36:05.631558] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:05.076 [2024-12-10 21:36:05.631608] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:05.076 [2024-12-10 21:36:05.634707] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:05.076 [2024-12-10 21:36:05.634792] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:05.076 [2024-12-10 21:36:05.634857] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:05.076 [2024-12-10 21:36:05.634912] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:05.076 { 00:09:05.076 "results": [ 00:09:05.076 { 00:09:05.076 "job": "raid_bdev1", 00:09:05.076 "core_mask": "0x1", 00:09:05.076 "workload": "randrw", 00:09:05.076 "percentage": 50, 00:09:05.076 "status": "finished", 00:09:05.076 "queue_depth": 1, 00:09:05.076 "io_size": 131072, 00:09:05.076 "runtime": 1.413692, 00:09:05.076 "iops": 14717.49150451442, 00:09:05.076 "mibps": 1839.6864380643026, 00:09:05.076 
"io_failed": 1, 00:09:05.076 "io_timeout": 0, 00:09:05.076 "avg_latency_us": 93.89946001964823, 00:09:05.076 "min_latency_us": 27.276855895196505, 00:09:05.076 "max_latency_us": 1574.0087336244542 00:09:05.076 } 00:09:05.076 ], 00:09:05.076 "core_count": 1 00:09:05.076 } 00:09:05.076 21:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.076 21:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61497 00:09:05.077 21:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61497 ']' 00:09:05.077 21:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61497 00:09:05.077 21:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:05.077 21:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.077 21:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61497 00:09:05.077 killing process with pid 61497 00:09:05.077 21:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:05.077 21:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:05.077 21:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61497' 00:09:05.077 21:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61497 00:09:05.077 [2024-12-10 21:36:05.674606] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:05.077 21:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61497 00:09:05.077 [2024-12-10 21:36:05.812804] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:06.463 21:36:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:06.463 21:36:07 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.lSabV6S8QZ 00:09:06.463 21:36:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:06.463 21:36:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:06.463 21:36:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:06.463 21:36:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:06.463 21:36:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:06.463 21:36:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:06.463 00:09:06.463 real 0m4.551s 00:09:06.463 user 0m5.496s 00:09:06.463 sys 0m0.544s 00:09:06.463 ************************************ 00:09:06.463 END TEST raid_read_error_test 00:09:06.463 ************************************ 00:09:06.463 21:36:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.463 21:36:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.463 21:36:07 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:09:06.463 21:36:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:06.463 21:36:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.463 21:36:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:06.463 ************************************ 00:09:06.463 START TEST raid_write_error_test 00:09:06.463 ************************************ 00:09:06.463 21:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:09:06.463 21:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:06.463 21:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:06.463 21:36:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:06.463 21:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:06.464 21:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:06.464 21:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:06.464 21:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:06.464 21:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:06.464 21:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:06.464 21:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:06.464 21:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:06.464 21:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:06.464 21:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:06.464 21:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:06.464 21:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:06.464 21:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:06.464 21:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:06.464 21:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:06.464 21:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:06.464 21:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:06.464 21:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:06.464 21:36:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:06.464 21:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.o3I4eAzpbP 00:09:06.464 21:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61643 00:09:06.464 21:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:06.464 21:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61643 00:09:06.464 21:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61643 ']' 00:09:06.464 21:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.464 21:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:06.464 21:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.464 21:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:06.464 21:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.723 [2024-12-10 21:36:07.275553] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:09:06.723 [2024-12-10 21:36:07.275771] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61643 ] 00:09:06.723 [2024-12-10 21:36:07.451576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.981 [2024-12-10 21:36:07.574029] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.238 [2024-12-10 21:36:07.787858] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:07.238 [2024-12-10 21:36:07.787931] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:07.496 21:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:07.496 21:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:07.496 21:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:07.496 21:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:07.496 21:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.496 21:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.496 BaseBdev1_malloc 00:09:07.496 21:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.496 21:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:07.496 21:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.496 21:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.496 true 00:09:07.496 21:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:07.496 21:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:07.497 21:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.497 21:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.497 [2024-12-10 21:36:08.212494] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:07.497 [2024-12-10 21:36:08.212628] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.497 [2024-12-10 21:36:08.212659] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:07.497 [2024-12-10 21:36:08.212672] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.497 [2024-12-10 21:36:08.214948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.497 [2024-12-10 21:36:08.214989] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:07.497 BaseBdev1 00:09:07.497 21:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.497 21:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:07.497 21:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:07.497 21:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.497 21:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.497 BaseBdev2_malloc 00:09:07.497 21:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.497 21:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:07.497 21:36:08 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.497 21:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.497 true 00:09:07.497 21:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.497 21:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:07.497 21:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.497 21:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.755 [2024-12-10 21:36:08.281467] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:07.755 [2024-12-10 21:36:08.281595] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.755 [2024-12-10 21:36:08.281620] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:07.755 [2024-12-10 21:36:08.281633] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.755 [2024-12-10 21:36:08.283839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.755 [2024-12-10 21:36:08.283883] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:07.755 BaseBdev2 00:09:07.755 21:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.755 21:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:07.755 21:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.755 21:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.755 [2024-12-10 21:36:08.289505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:07.755 [2024-12-10 21:36:08.291302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:07.755 [2024-12-10 21:36:08.291517] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:07.755 [2024-12-10 21:36:08.291538] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:07.755 [2024-12-10 21:36:08.291799] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:07.755 [2024-12-10 21:36:08.291986] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:07.755 [2024-12-10 21:36:08.292000] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:07.755 [2024-12-10 21:36:08.292175] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:07.755 21:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.755 21:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:07.755 21:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:07.755 21:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:07.756 21:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:07.756 21:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.756 21:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:07.756 21:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.756 21:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.756 21:36:08 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.756 21:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.756 21:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.756 21:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.756 21:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:07.756 21:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.756 21:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.756 21:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.756 "name": "raid_bdev1", 00:09:07.756 "uuid": "59694a90-bf38-4bcb-b756-dc601a3ee0dd", 00:09:07.756 "strip_size_kb": 64, 00:09:07.756 "state": "online", 00:09:07.756 "raid_level": "raid0", 00:09:07.756 "superblock": true, 00:09:07.756 "num_base_bdevs": 2, 00:09:07.756 "num_base_bdevs_discovered": 2, 00:09:07.756 "num_base_bdevs_operational": 2, 00:09:07.756 "base_bdevs_list": [ 00:09:07.756 { 00:09:07.756 "name": "BaseBdev1", 00:09:07.756 "uuid": "bb8143e4-a74a-5a9a-bb9f-a7150b2511e2", 00:09:07.756 "is_configured": true, 00:09:07.756 "data_offset": 2048, 00:09:07.756 "data_size": 63488 00:09:07.756 }, 00:09:07.756 { 00:09:07.756 "name": "BaseBdev2", 00:09:07.756 "uuid": "952c829b-aa4a-5c2c-941b-f36e35d4c650", 00:09:07.756 "is_configured": true, 00:09:07.756 "data_offset": 2048, 00:09:07.756 "data_size": 63488 00:09:07.756 } 00:09:07.756 ] 00:09:07.756 }' 00:09:07.756 21:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.756 21:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.014 21:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:08.014 21:36:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:08.272 [2024-12-10 21:36:08.862265] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:09.207 21:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:09.207 21:36:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.207 21:36:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.207 21:36:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.207 21:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:09.207 21:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:09.207 21:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:09.207 21:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:09.207 21:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:09.207 21:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:09.207 21:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.207 21:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.207 21:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:09.207 21:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.207 21:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.207 21:36:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.207 21:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.207 21:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.207 21:36:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.207 21:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:09.207 21:36:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.207 21:36:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.207 21:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.207 "name": "raid_bdev1", 00:09:09.207 "uuid": "59694a90-bf38-4bcb-b756-dc601a3ee0dd", 00:09:09.207 "strip_size_kb": 64, 00:09:09.207 "state": "online", 00:09:09.207 "raid_level": "raid0", 00:09:09.207 "superblock": true, 00:09:09.207 "num_base_bdevs": 2, 00:09:09.207 "num_base_bdevs_discovered": 2, 00:09:09.207 "num_base_bdevs_operational": 2, 00:09:09.207 "base_bdevs_list": [ 00:09:09.207 { 00:09:09.207 "name": "BaseBdev1", 00:09:09.207 "uuid": "bb8143e4-a74a-5a9a-bb9f-a7150b2511e2", 00:09:09.207 "is_configured": true, 00:09:09.207 "data_offset": 2048, 00:09:09.207 "data_size": 63488 00:09:09.207 }, 00:09:09.207 { 00:09:09.207 "name": "BaseBdev2", 00:09:09.207 "uuid": "952c829b-aa4a-5c2c-941b-f36e35d4c650", 00:09:09.207 "is_configured": true, 00:09:09.207 "data_offset": 2048, 00:09:09.207 "data_size": 63488 00:09:09.207 } 00:09:09.207 ] 00:09:09.207 }' 00:09:09.207 21:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.207 21:36:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.774 21:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:09:09.774 21:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.774 21:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.774 [2024-12-10 21:36:10.274913] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:09.774 [2024-12-10 21:36:10.275033] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:09.774 [2024-12-10 21:36:10.277959] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:09.774 [2024-12-10 21:36:10.278044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:09.774 [2024-12-10 21:36:10.278094] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:09.774 [2024-12-10 21:36:10.278137] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:09.774 { 00:09:09.774 "results": [ 00:09:09.774 { 00:09:09.774 "job": "raid_bdev1", 00:09:09.774 "core_mask": "0x1", 00:09:09.774 "workload": "randrw", 00:09:09.774 "percentage": 50, 00:09:09.774 "status": "finished", 00:09:09.774 "queue_depth": 1, 00:09:09.774 "io_size": 131072, 00:09:09.774 "runtime": 1.413469, 00:09:09.774 "iops": 14898.805704263765, 00:09:09.774 "mibps": 1862.3507130329706, 00:09:09.774 "io_failed": 1, 00:09:09.774 "io_timeout": 0, 00:09:09.774 "avg_latency_us": 92.88217337032475, 00:09:09.774 "min_latency_us": 27.053275109170304, 00:09:09.774 "max_latency_us": 1459.5353711790392 00:09:09.774 } 00:09:09.774 ], 00:09:09.774 "core_count": 1 00:09:09.774 } 00:09:09.774 21:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.774 21:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61643 00:09:09.774 21:36:10 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61643 ']' 00:09:09.774 21:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61643 00:09:09.774 21:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:09.774 21:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:09.774 21:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61643 00:09:09.774 killing process with pid 61643 00:09:09.775 21:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:09.775 21:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:09.775 21:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61643' 00:09:09.775 21:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61643 00:09:09.775 [2024-12-10 21:36:10.325200] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:09.775 21:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61643 00:09:09.775 [2024-12-10 21:36:10.471831] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:11.150 21:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:11.150 21:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.o3I4eAzpbP 00:09:11.150 21:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:11.150 21:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:11.150 21:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:11.150 21:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:11.150 21:36:11 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:09:11.150 21:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:11.150 00:09:11.150 real 0m4.521s 00:09:11.150 user 0m5.466s 00:09:11.150 sys 0m0.575s 00:09:11.150 ************************************ 00:09:11.150 END TEST raid_write_error_test 00:09:11.150 ************************************ 00:09:11.150 21:36:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.150 21:36:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.150 21:36:11 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:11.150 21:36:11 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:09:11.150 21:36:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:11.150 21:36:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.150 21:36:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:11.150 ************************************ 00:09:11.150 START TEST raid_state_function_test 00:09:11.150 ************************************ 00:09:11.150 21:36:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:09:11.150 21:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:11.150 21:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:11.150 21:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:11.150 21:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:11.150 21:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:11.150 21:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:09:11.150 21:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:11.150 21:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:11.150 21:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:11.150 21:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:11.150 21:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:11.150 21:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:11.150 21:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:11.150 21:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:11.150 21:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:11.150 21:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:11.150 21:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:11.150 21:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:11.150 21:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:11.150 21:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:11.150 21:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:11.150 21:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:11.150 21:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:11.150 Process raid pid: 61786 00:09:11.150 21:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61786 
00:09:11.150 21:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61786' 00:09:11.150 21:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:11.150 21:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61786 00:09:11.150 21:36:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61786 ']' 00:09:11.150 21:36:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.150 21:36:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:11.150 21:36:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.150 21:36:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:11.150 21:36:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.150 [2024-12-10 21:36:11.861691] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:09:11.150 [2024-12-10 21:36:11.861885] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:11.409 [2024-12-10 21:36:12.034777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.409 [2024-12-10 21:36:12.159151] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.667 [2024-12-10 21:36:12.362444] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:11.667 [2024-12-10 21:36:12.362508] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:12.233 21:36:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.233 21:36:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:12.233 21:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:12.233 21:36:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.233 21:36:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.233 [2024-12-10 21:36:12.725273] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:12.233 [2024-12-10 21:36:12.725403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:12.233 [2024-12-10 21:36:12.725436] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:12.233 [2024-12-10 21:36:12.725449] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:12.233 21:36:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.233 21:36:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:12.233 21:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.233 21:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.233 21:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:12.233 21:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.233 21:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:12.233 21:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.233 21:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.233 21:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.233 21:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.233 21:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.233 21:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.233 21:36:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.233 21:36:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.233 21:36:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.233 21:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.233 "name": "Existed_Raid", 00:09:12.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.233 "strip_size_kb": 64, 00:09:12.233 "state": "configuring", 00:09:12.233 
"raid_level": "concat", 00:09:12.233 "superblock": false, 00:09:12.233 "num_base_bdevs": 2, 00:09:12.233 "num_base_bdevs_discovered": 0, 00:09:12.233 "num_base_bdevs_operational": 2, 00:09:12.233 "base_bdevs_list": [ 00:09:12.233 { 00:09:12.233 "name": "BaseBdev1", 00:09:12.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.233 "is_configured": false, 00:09:12.233 "data_offset": 0, 00:09:12.233 "data_size": 0 00:09:12.233 }, 00:09:12.233 { 00:09:12.233 "name": "BaseBdev2", 00:09:12.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.233 "is_configured": false, 00:09:12.233 "data_offset": 0, 00:09:12.233 "data_size": 0 00:09:12.233 } 00:09:12.233 ] 00:09:12.233 }' 00:09:12.233 21:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.233 21:36:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.493 21:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:12.493 21:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.493 21:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.493 [2024-12-10 21:36:13.164579] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:12.493 [2024-12-10 21:36:13.164682] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:12.493 21:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.493 21:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:12.493 21:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.493 21:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:12.493 [2024-12-10 21:36:13.176571] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:12.493 [2024-12-10 21:36:13.176668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:12.493 [2024-12-10 21:36:13.176699] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:12.493 [2024-12-10 21:36:13.176728] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:12.493 21:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.493 21:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:12.493 21:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.493 21:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.493 [2024-12-10 21:36:13.230670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:12.493 BaseBdev1 00:09:12.493 21:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.493 21:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:12.493 21:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:12.493 21:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:12.493 21:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:12.493 21:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:12.493 21:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:12.493 21:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:09:12.493 21:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.493 21:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.493 21:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.493 21:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:12.493 21:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.493 21:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.493 [ 00:09:12.493 { 00:09:12.493 "name": "BaseBdev1", 00:09:12.493 "aliases": [ 00:09:12.493 "f3601bbf-4152-4ac0-b4a9-0bd919a4fee5" 00:09:12.493 ], 00:09:12.493 "product_name": "Malloc disk", 00:09:12.493 "block_size": 512, 00:09:12.493 "num_blocks": 65536, 00:09:12.493 "uuid": "f3601bbf-4152-4ac0-b4a9-0bd919a4fee5", 00:09:12.493 "assigned_rate_limits": { 00:09:12.493 "rw_ios_per_sec": 0, 00:09:12.493 "rw_mbytes_per_sec": 0, 00:09:12.493 "r_mbytes_per_sec": 0, 00:09:12.493 "w_mbytes_per_sec": 0 00:09:12.493 }, 00:09:12.493 "claimed": true, 00:09:12.493 "claim_type": "exclusive_write", 00:09:12.493 "zoned": false, 00:09:12.493 "supported_io_types": { 00:09:12.493 "read": true, 00:09:12.493 "write": true, 00:09:12.493 "unmap": true, 00:09:12.493 "flush": true, 00:09:12.493 "reset": true, 00:09:12.493 "nvme_admin": false, 00:09:12.493 "nvme_io": false, 00:09:12.493 "nvme_io_md": false, 00:09:12.493 "write_zeroes": true, 00:09:12.493 "zcopy": true, 00:09:12.493 "get_zone_info": false, 00:09:12.493 "zone_management": false, 00:09:12.493 "zone_append": false, 00:09:12.493 "compare": false, 00:09:12.493 "compare_and_write": false, 00:09:12.493 "abort": true, 00:09:12.493 "seek_hole": false, 00:09:12.493 "seek_data": false, 00:09:12.493 "copy": true, 00:09:12.493 "nvme_iov_md": 
false 00:09:12.493 }, 00:09:12.493 "memory_domains": [ 00:09:12.493 { 00:09:12.493 "dma_device_id": "system", 00:09:12.493 "dma_device_type": 1 00:09:12.493 }, 00:09:12.493 { 00:09:12.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.493 "dma_device_type": 2 00:09:12.493 } 00:09:12.493 ], 00:09:12.493 "driver_specific": {} 00:09:12.493 } 00:09:12.493 ] 00:09:12.493 21:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.493 21:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:12.493 21:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:12.493 21:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.493 21:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.493 21:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:12.493 21:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.493 21:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:12.493 21:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.493 21:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.493 21:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.493 21:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.752 21:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.752 21:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.752 21:36:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.752 21:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.752 21:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.752 21:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.752 "name": "Existed_Raid", 00:09:12.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.752 "strip_size_kb": 64, 00:09:12.752 "state": "configuring", 00:09:12.752 "raid_level": "concat", 00:09:12.752 "superblock": false, 00:09:12.752 "num_base_bdevs": 2, 00:09:12.752 "num_base_bdevs_discovered": 1, 00:09:12.752 "num_base_bdevs_operational": 2, 00:09:12.752 "base_bdevs_list": [ 00:09:12.752 { 00:09:12.752 "name": "BaseBdev1", 00:09:12.752 "uuid": "f3601bbf-4152-4ac0-b4a9-0bd919a4fee5", 00:09:12.752 "is_configured": true, 00:09:12.752 "data_offset": 0, 00:09:12.752 "data_size": 65536 00:09:12.752 }, 00:09:12.752 { 00:09:12.752 "name": "BaseBdev2", 00:09:12.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.752 "is_configured": false, 00:09:12.752 "data_offset": 0, 00:09:12.752 "data_size": 0 00:09:12.752 } 00:09:12.752 ] 00:09:12.752 }' 00:09:12.752 21:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.752 21:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.010 21:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:13.010 21:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.010 21:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.010 [2024-12-10 21:36:13.701969] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:13.010 [2024-12-10 21:36:13.702070] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:13.010 21:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.010 21:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:13.010 21:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.010 21:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.010 [2024-12-10 21:36:13.713982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:13.010 [2024-12-10 21:36:13.715995] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:13.010 [2024-12-10 21:36:13.716082] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:13.010 21:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.010 21:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:13.010 21:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:13.010 21:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:13.010 21:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.010 21:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.010 21:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:13.010 21:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.010 21:36:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:13.010 21:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.010 21:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.010 21:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.010 21:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.010 21:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.010 21:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.010 21:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.010 21:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.010 21:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.010 21:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.010 "name": "Existed_Raid", 00:09:13.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.010 "strip_size_kb": 64, 00:09:13.010 "state": "configuring", 00:09:13.010 "raid_level": "concat", 00:09:13.010 "superblock": false, 00:09:13.010 "num_base_bdevs": 2, 00:09:13.010 "num_base_bdevs_discovered": 1, 00:09:13.010 "num_base_bdevs_operational": 2, 00:09:13.010 "base_bdevs_list": [ 00:09:13.010 { 00:09:13.010 "name": "BaseBdev1", 00:09:13.010 "uuid": "f3601bbf-4152-4ac0-b4a9-0bd919a4fee5", 00:09:13.010 "is_configured": true, 00:09:13.010 "data_offset": 0, 00:09:13.010 "data_size": 65536 00:09:13.010 }, 00:09:13.010 { 00:09:13.010 "name": "BaseBdev2", 00:09:13.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.010 "is_configured": false, 00:09:13.010 "data_offset": 0, 00:09:13.010 "data_size": 0 
00:09:13.010 } 00:09:13.010 ] 00:09:13.010 }' 00:09:13.010 21:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.010 21:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.578 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:13.578 21:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.578 21:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.578 [2024-12-10 21:36:14.192095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:13.578 [2024-12-10 21:36:14.192222] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:13.578 [2024-12-10 21:36:14.192247] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:13.578 [2024-12-10 21:36:14.192619] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:13.578 [2024-12-10 21:36:14.192853] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:13.578 [2024-12-10 21:36:14.192902] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:13.578 [2024-12-10 21:36:14.193256] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:13.578 BaseBdev2 00:09:13.578 21:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.578 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:13.578 21:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:13.578 21:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:13.578 21:36:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:13.578 21:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:13.578 21:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:13.578 21:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:13.578 21:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.578 21:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.578 21:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.578 21:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:13.578 21:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.578 21:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.578 [ 00:09:13.578 { 00:09:13.578 "name": "BaseBdev2", 00:09:13.578 "aliases": [ 00:09:13.579 "f056a021-cad4-4dba-a7dd-a21866ad3843" 00:09:13.579 ], 00:09:13.579 "product_name": "Malloc disk", 00:09:13.579 "block_size": 512, 00:09:13.579 "num_blocks": 65536, 00:09:13.579 "uuid": "f056a021-cad4-4dba-a7dd-a21866ad3843", 00:09:13.579 "assigned_rate_limits": { 00:09:13.579 "rw_ios_per_sec": 0, 00:09:13.579 "rw_mbytes_per_sec": 0, 00:09:13.579 "r_mbytes_per_sec": 0, 00:09:13.579 "w_mbytes_per_sec": 0 00:09:13.579 }, 00:09:13.579 "claimed": true, 00:09:13.579 "claim_type": "exclusive_write", 00:09:13.579 "zoned": false, 00:09:13.579 "supported_io_types": { 00:09:13.579 "read": true, 00:09:13.579 "write": true, 00:09:13.579 "unmap": true, 00:09:13.579 "flush": true, 00:09:13.579 "reset": true, 00:09:13.579 "nvme_admin": false, 00:09:13.579 "nvme_io": false, 00:09:13.579 "nvme_io_md": 
false, 00:09:13.579 "write_zeroes": true, 00:09:13.579 "zcopy": true, 00:09:13.579 "get_zone_info": false, 00:09:13.579 "zone_management": false, 00:09:13.579 "zone_append": false, 00:09:13.579 "compare": false, 00:09:13.579 "compare_and_write": false, 00:09:13.579 "abort": true, 00:09:13.579 "seek_hole": false, 00:09:13.579 "seek_data": false, 00:09:13.579 "copy": true, 00:09:13.579 "nvme_iov_md": false 00:09:13.579 }, 00:09:13.579 "memory_domains": [ 00:09:13.579 { 00:09:13.579 "dma_device_id": "system", 00:09:13.579 "dma_device_type": 1 00:09:13.579 }, 00:09:13.579 { 00:09:13.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.579 "dma_device_type": 2 00:09:13.579 } 00:09:13.579 ], 00:09:13.579 "driver_specific": {} 00:09:13.579 } 00:09:13.579 ] 00:09:13.579 21:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.579 21:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:13.579 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:13.579 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:13.579 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:09:13.579 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.579 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:13.579 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:13.579 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.579 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:13.579 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:13.579 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.579 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.579 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.579 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.579 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.579 21:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.579 21:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.579 21:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.579 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.579 "name": "Existed_Raid", 00:09:13.579 "uuid": "565873fc-cbf1-4d25-99f6-111d183ec315", 00:09:13.579 "strip_size_kb": 64, 00:09:13.579 "state": "online", 00:09:13.579 "raid_level": "concat", 00:09:13.579 "superblock": false, 00:09:13.579 "num_base_bdevs": 2, 00:09:13.579 "num_base_bdevs_discovered": 2, 00:09:13.579 "num_base_bdevs_operational": 2, 00:09:13.579 "base_bdevs_list": [ 00:09:13.579 { 00:09:13.579 "name": "BaseBdev1", 00:09:13.579 "uuid": "f3601bbf-4152-4ac0-b4a9-0bd919a4fee5", 00:09:13.579 "is_configured": true, 00:09:13.579 "data_offset": 0, 00:09:13.579 "data_size": 65536 00:09:13.579 }, 00:09:13.579 { 00:09:13.579 "name": "BaseBdev2", 00:09:13.579 "uuid": "f056a021-cad4-4dba-a7dd-a21866ad3843", 00:09:13.579 "is_configured": true, 00:09:13.579 "data_offset": 0, 00:09:13.579 "data_size": 65536 00:09:13.579 } 00:09:13.579 ] 00:09:13.579 }' 00:09:13.579 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:13.579 21:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.147 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:14.147 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:14.147 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:14.147 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:14.147 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:14.147 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:14.147 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:14.147 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:14.147 21:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.147 21:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.147 [2024-12-10 21:36:14.683661] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:14.147 21:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.147 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:14.147 "name": "Existed_Raid", 00:09:14.147 "aliases": [ 00:09:14.147 "565873fc-cbf1-4d25-99f6-111d183ec315" 00:09:14.147 ], 00:09:14.147 "product_name": "Raid Volume", 00:09:14.147 "block_size": 512, 00:09:14.147 "num_blocks": 131072, 00:09:14.147 "uuid": "565873fc-cbf1-4d25-99f6-111d183ec315", 00:09:14.147 "assigned_rate_limits": { 00:09:14.147 "rw_ios_per_sec": 0, 00:09:14.147 "rw_mbytes_per_sec": 0, 00:09:14.147 "r_mbytes_per_sec": 
0, 00:09:14.147 "w_mbytes_per_sec": 0 00:09:14.147 }, 00:09:14.147 "claimed": false, 00:09:14.147 "zoned": false, 00:09:14.147 "supported_io_types": { 00:09:14.147 "read": true, 00:09:14.147 "write": true, 00:09:14.147 "unmap": true, 00:09:14.147 "flush": true, 00:09:14.147 "reset": true, 00:09:14.147 "nvme_admin": false, 00:09:14.147 "nvme_io": false, 00:09:14.147 "nvme_io_md": false, 00:09:14.147 "write_zeroes": true, 00:09:14.147 "zcopy": false, 00:09:14.147 "get_zone_info": false, 00:09:14.147 "zone_management": false, 00:09:14.147 "zone_append": false, 00:09:14.147 "compare": false, 00:09:14.147 "compare_and_write": false, 00:09:14.147 "abort": false, 00:09:14.147 "seek_hole": false, 00:09:14.147 "seek_data": false, 00:09:14.147 "copy": false, 00:09:14.147 "nvme_iov_md": false 00:09:14.147 }, 00:09:14.147 "memory_domains": [ 00:09:14.147 { 00:09:14.147 "dma_device_id": "system", 00:09:14.147 "dma_device_type": 1 00:09:14.147 }, 00:09:14.147 { 00:09:14.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.147 "dma_device_type": 2 00:09:14.147 }, 00:09:14.147 { 00:09:14.147 "dma_device_id": "system", 00:09:14.147 "dma_device_type": 1 00:09:14.147 }, 00:09:14.147 { 00:09:14.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.147 "dma_device_type": 2 00:09:14.147 } 00:09:14.147 ], 00:09:14.147 "driver_specific": { 00:09:14.147 "raid": { 00:09:14.147 "uuid": "565873fc-cbf1-4d25-99f6-111d183ec315", 00:09:14.147 "strip_size_kb": 64, 00:09:14.147 "state": "online", 00:09:14.147 "raid_level": "concat", 00:09:14.147 "superblock": false, 00:09:14.147 "num_base_bdevs": 2, 00:09:14.147 "num_base_bdevs_discovered": 2, 00:09:14.147 "num_base_bdevs_operational": 2, 00:09:14.147 "base_bdevs_list": [ 00:09:14.147 { 00:09:14.147 "name": "BaseBdev1", 00:09:14.147 "uuid": "f3601bbf-4152-4ac0-b4a9-0bd919a4fee5", 00:09:14.147 "is_configured": true, 00:09:14.147 "data_offset": 0, 00:09:14.147 "data_size": 65536 00:09:14.147 }, 00:09:14.147 { 00:09:14.147 "name": "BaseBdev2", 
00:09:14.147 "uuid": "f056a021-cad4-4dba-a7dd-a21866ad3843", 00:09:14.147 "is_configured": true, 00:09:14.147 "data_offset": 0, 00:09:14.147 "data_size": 65536 00:09:14.147 } 00:09:14.147 ] 00:09:14.147 } 00:09:14.147 } 00:09:14.147 }' 00:09:14.148 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:14.148 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:14.148 BaseBdev2' 00:09:14.148 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.148 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:14.148 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:14.148 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.148 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:14.148 21:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.148 21:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.148 21:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.148 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:14.148 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:14.148 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:14.148 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:09:14.148 21:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.148 21:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.148 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.148 21:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.148 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:14.148 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:14.148 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:14.148 21:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.148 21:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.148 [2024-12-10 21:36:14.895032] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:14.148 [2024-12-10 21:36:14.895148] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:14.148 [2024-12-10 21:36:14.895211] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:14.407 21:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.407 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:14.407 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:14.407 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:14.407 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:14.407 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:09:14.407 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:09:14.407 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.407 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:14.407 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.407 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.407 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:14.407 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.407 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.407 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.407 21:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.407 21:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.407 21:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.407 21:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.407 21:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.407 21:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.407 21:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.407 "name": "Existed_Raid", 00:09:14.407 "uuid": "565873fc-cbf1-4d25-99f6-111d183ec315", 00:09:14.407 "strip_size_kb": 64, 00:09:14.407 
"state": "offline", 00:09:14.407 "raid_level": "concat", 00:09:14.407 "superblock": false, 00:09:14.407 "num_base_bdevs": 2, 00:09:14.407 "num_base_bdevs_discovered": 1, 00:09:14.407 "num_base_bdevs_operational": 1, 00:09:14.407 "base_bdevs_list": [ 00:09:14.407 { 00:09:14.407 "name": null, 00:09:14.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.407 "is_configured": false, 00:09:14.407 "data_offset": 0, 00:09:14.407 "data_size": 65536 00:09:14.407 }, 00:09:14.407 { 00:09:14.407 "name": "BaseBdev2", 00:09:14.407 "uuid": "f056a021-cad4-4dba-a7dd-a21866ad3843", 00:09:14.407 "is_configured": true, 00:09:14.407 "data_offset": 0, 00:09:14.407 "data_size": 65536 00:09:14.407 } 00:09:14.407 ] 00:09:14.407 }' 00:09:14.407 21:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.407 21:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.665 21:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:14.665 21:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:14.665 21:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.665 21:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:14.665 21:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.665 21:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.924 21:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.924 21:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:14.924 21:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:14.924 21:36:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:14.924 21:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.924 21:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.924 [2024-12-10 21:36:15.495603] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:14.924 [2024-12-10 21:36:15.495663] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:14.924 21:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.924 21:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:14.924 21:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:14.924 21:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.924 21:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.924 21:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:14.924 21:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.924 21:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.924 21:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:14.924 21:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:14.924 21:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:14.924 21:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61786 00:09:14.924 21:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61786 ']' 00:09:14.924 21:36:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61786 00:09:14.924 21:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:14.924 21:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:14.924 21:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61786 00:09:14.924 21:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:14.924 21:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:14.924 21:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61786' 00:09:14.924 killing process with pid 61786 00:09:14.924 21:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61786 00:09:14.925 [2024-12-10 21:36:15.695491] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:14.925 21:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61786 00:09:15.183 [2024-12-10 21:36:15.716092] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:16.131 21:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:16.131 ************************************ 00:09:16.131 END TEST raid_state_function_test 00:09:16.131 ************************************ 00:09:16.131 00:09:16.131 real 0m5.142s 00:09:16.131 user 0m7.391s 00:09:16.131 sys 0m0.806s 00:09:16.131 21:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.131 21:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.390 21:36:16 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:09:16.390 21:36:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:09:16.390 21:36:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.390 21:36:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:16.390 ************************************ 00:09:16.390 START TEST raid_state_function_test_sb 00:09:16.390 ************************************ 00:09:16.390 21:36:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:09:16.390 21:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:16.390 21:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:16.390 21:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:16.390 21:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:16.390 21:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:16.390 21:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:16.390 21:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:16.390 21:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:16.390 21:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:16.390 21:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:16.390 21:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:16.390 21:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:16.390 21:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:16.390 21:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:09:16.390 21:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:16.390 21:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:16.390 21:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:16.390 21:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:16.390 21:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:16.390 21:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:16.390 21:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:16.390 21:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:16.390 21:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:16.390 21:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62039 00:09:16.390 Process raid pid: 62039 00:09:16.390 21:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:16.390 21:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62039' 00:09:16.390 21:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62039 00:09:16.391 21:36:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62039 ']' 00:09:16.391 21:36:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.391 21:36:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:16.391 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:09:16.391 21:36:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.391 21:36:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:16.391 21:36:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.391 [2024-12-10 21:36:17.068436] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:09:16.391 [2024-12-10 21:36:17.068570] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:16.649 [2024-12-10 21:36:17.246107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.649 [2024-12-10 21:36:17.375556] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.907 [2024-12-10 21:36:17.592379] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:16.907 [2024-12-10 21:36:17.592443] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.166 21:36:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.166 21:36:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:17.166 21:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:17.166 21:36:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.166 21:36:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.166 [2024-12-10 21:36:17.931654] bdev.c:8697:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:09:17.166 [2024-12-10 21:36:17.931709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:17.166 [2024-12-10 21:36:17.931720] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:17.166 [2024-12-10 21:36:17.931747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:17.166 21:36:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.166 21:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:17.166 21:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.166 21:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.166 21:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:17.166 21:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.166 21:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:17.166 21:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.167 21:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.167 21:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.167 21:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.167 21:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.167 21:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:17.167 21:36:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.167 21:36:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.425 21:36:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.425 21:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.425 "name": "Existed_Raid", 00:09:17.425 "uuid": "fe19a98e-88d5-4ed5-9911-813b4793b3ad", 00:09:17.425 "strip_size_kb": 64, 00:09:17.425 "state": "configuring", 00:09:17.425 "raid_level": "concat", 00:09:17.425 "superblock": true, 00:09:17.425 "num_base_bdevs": 2, 00:09:17.425 "num_base_bdevs_discovered": 0, 00:09:17.425 "num_base_bdevs_operational": 2, 00:09:17.425 "base_bdevs_list": [ 00:09:17.425 { 00:09:17.425 "name": "BaseBdev1", 00:09:17.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.425 "is_configured": false, 00:09:17.425 "data_offset": 0, 00:09:17.425 "data_size": 0 00:09:17.425 }, 00:09:17.425 { 00:09:17.425 "name": "BaseBdev2", 00:09:17.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.425 "is_configured": false, 00:09:17.425 "data_offset": 0, 00:09:17.426 "data_size": 0 00:09:17.426 } 00:09:17.426 ] 00:09:17.426 }' 00:09:17.426 21:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.426 21:36:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.685 21:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:17.685 21:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.685 21:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.685 [2024-12-10 21:36:18.426752] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:09:17.685 [2024-12-10 21:36:18.426883] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:17.685 21:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.685 21:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:17.685 21:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.685 21:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.685 [2024-12-10 21:36:18.438789] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:17.685 [2024-12-10 21:36:18.438853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:17.685 [2024-12-10 21:36:18.438863] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:17.685 [2024-12-10 21:36:18.438876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:17.685 21:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.685 21:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:17.685 21:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.685 21:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.943 [2024-12-10 21:36:18.485473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:17.943 BaseBdev1 00:09:17.943 21:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.943 21:36:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:17.943 21:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:17.943 21:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:17.943 21:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:17.943 21:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:17.944 21:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:17.944 21:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:17.944 21:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.944 21:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.944 21:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.944 21:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:17.944 21:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.944 21:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.944 [ 00:09:17.944 { 00:09:17.944 "name": "BaseBdev1", 00:09:17.944 "aliases": [ 00:09:17.944 "c58cc06a-166b-48ae-bbb1-925afa8e5efb" 00:09:17.944 ], 00:09:17.944 "product_name": "Malloc disk", 00:09:17.944 "block_size": 512, 00:09:17.944 "num_blocks": 65536, 00:09:17.944 "uuid": "c58cc06a-166b-48ae-bbb1-925afa8e5efb", 00:09:17.944 "assigned_rate_limits": { 00:09:17.944 "rw_ios_per_sec": 0, 00:09:17.944 "rw_mbytes_per_sec": 0, 00:09:17.944 "r_mbytes_per_sec": 0, 00:09:17.944 "w_mbytes_per_sec": 0 00:09:17.944 }, 00:09:17.944 "claimed": true, 
00:09:17.944 "claim_type": "exclusive_write", 00:09:17.944 "zoned": false, 00:09:17.944 "supported_io_types": { 00:09:17.944 "read": true, 00:09:17.944 "write": true, 00:09:17.944 "unmap": true, 00:09:17.944 "flush": true, 00:09:17.944 "reset": true, 00:09:17.944 "nvme_admin": false, 00:09:17.944 "nvme_io": false, 00:09:17.944 "nvme_io_md": false, 00:09:17.944 "write_zeroes": true, 00:09:17.944 "zcopy": true, 00:09:17.944 "get_zone_info": false, 00:09:17.944 "zone_management": false, 00:09:17.944 "zone_append": false, 00:09:17.944 "compare": false, 00:09:17.944 "compare_and_write": false, 00:09:17.944 "abort": true, 00:09:17.944 "seek_hole": false, 00:09:17.944 "seek_data": false, 00:09:17.944 "copy": true, 00:09:17.944 "nvme_iov_md": false 00:09:17.944 }, 00:09:17.944 "memory_domains": [ 00:09:17.944 { 00:09:17.944 "dma_device_id": "system", 00:09:17.944 "dma_device_type": 1 00:09:17.944 }, 00:09:17.944 { 00:09:17.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.944 "dma_device_type": 2 00:09:17.944 } 00:09:17.944 ], 00:09:17.944 "driver_specific": {} 00:09:17.944 } 00:09:17.944 ] 00:09:17.944 21:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.944 21:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:17.944 21:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:17.944 21:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.944 21:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.944 21:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:17.944 21:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.944 21:36:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:17.944 21:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.944 21:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.944 21:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.944 21:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.944 21:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.944 21:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.944 21:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.944 21:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.944 21:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.944 21:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.944 "name": "Existed_Raid", 00:09:17.944 "uuid": "2374eab7-7627-4b27-8991-e82bc1dbc0de", 00:09:17.944 "strip_size_kb": 64, 00:09:17.944 "state": "configuring", 00:09:17.944 "raid_level": "concat", 00:09:17.944 "superblock": true, 00:09:17.944 "num_base_bdevs": 2, 00:09:17.944 "num_base_bdevs_discovered": 1, 00:09:17.944 "num_base_bdevs_operational": 2, 00:09:17.944 "base_bdevs_list": [ 00:09:17.944 { 00:09:17.944 "name": "BaseBdev1", 00:09:17.944 "uuid": "c58cc06a-166b-48ae-bbb1-925afa8e5efb", 00:09:17.944 "is_configured": true, 00:09:17.944 "data_offset": 2048, 00:09:17.944 "data_size": 63488 00:09:17.944 }, 00:09:17.944 { 00:09:17.944 "name": "BaseBdev2", 00:09:17.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.944 
"is_configured": false, 00:09:17.944 "data_offset": 0, 00:09:17.944 "data_size": 0 00:09:17.944 } 00:09:17.944 ] 00:09:17.944 }' 00:09:17.944 21:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.944 21:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.511 21:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:18.511 21:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.511 21:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.511 [2024-12-10 21:36:18.992677] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:18.511 [2024-12-10 21:36:18.992743] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:18.511 21:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.511 21:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:18.511 21:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.511 21:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.511 [2024-12-10 21:36:19.000700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:18.511 [2024-12-10 21:36:19.002848] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:18.511 [2024-12-10 21:36:19.002937] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:18.511 21:36:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.511 21:36:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:18.511 21:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:18.511 21:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:18.511 21:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.511 21:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.511 21:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.511 21:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.511 21:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:18.511 21:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.511 21:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.511 21:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.511 21:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.511 21:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.511 21:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.511 21:36:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.511 21:36:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.511 21:36:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.511 21:36:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.511 "name": "Existed_Raid", 00:09:18.511 "uuid": "018eb140-a2bc-4013-9f87-a119662264fa", 00:09:18.511 "strip_size_kb": 64, 00:09:18.511 "state": "configuring", 00:09:18.511 "raid_level": "concat", 00:09:18.511 "superblock": true, 00:09:18.511 "num_base_bdevs": 2, 00:09:18.511 "num_base_bdevs_discovered": 1, 00:09:18.511 "num_base_bdevs_operational": 2, 00:09:18.511 "base_bdevs_list": [ 00:09:18.511 { 00:09:18.511 "name": "BaseBdev1", 00:09:18.511 "uuid": "c58cc06a-166b-48ae-bbb1-925afa8e5efb", 00:09:18.511 "is_configured": true, 00:09:18.511 "data_offset": 2048, 00:09:18.511 "data_size": 63488 00:09:18.511 }, 00:09:18.511 { 00:09:18.511 "name": "BaseBdev2", 00:09:18.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.511 "is_configured": false, 00:09:18.511 "data_offset": 0, 00:09:18.511 "data_size": 0 00:09:18.511 } 00:09:18.511 ] 00:09:18.511 }' 00:09:18.511 21:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.511 21:36:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.770 21:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:18.770 21:36:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.770 21:36:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.770 [2024-12-10 21:36:19.525893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:18.770 [2024-12-10 21:36:19.526321] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:18.770 [2024-12-10 21:36:19.526380] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:18.770 [2024-12-10 21:36:19.526709] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:09:18.770 BaseBdev2 00:09:18.770 [2024-12-10 21:36:19.526930] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:18.770 [2024-12-10 21:36:19.526948] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:18.770 [2024-12-10 21:36:19.527105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:18.770 21:36:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.770 21:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:18.771 21:36:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:18.771 21:36:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:18.771 21:36:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:18.771 21:36:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:18.771 21:36:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:18.771 21:36:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:18.771 21:36:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.771 21:36:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.771 21:36:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.771 21:36:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:18.771 21:36:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.771 21:36:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.771 [ 00:09:18.771 { 00:09:18.771 "name": "BaseBdev2", 00:09:18.771 "aliases": [ 00:09:18.771 "11ef8fd4-ea0c-4b87-ada4-6b9414cbf015" 00:09:18.771 ], 00:09:18.771 "product_name": "Malloc disk", 00:09:18.771 "block_size": 512, 00:09:18.771 "num_blocks": 65536, 00:09:18.771 "uuid": "11ef8fd4-ea0c-4b87-ada4-6b9414cbf015", 00:09:18.771 "assigned_rate_limits": { 00:09:18.771 "rw_ios_per_sec": 0, 00:09:18.771 "rw_mbytes_per_sec": 0, 00:09:18.771 "r_mbytes_per_sec": 0, 00:09:18.771 "w_mbytes_per_sec": 0 00:09:18.771 }, 00:09:18.771 "claimed": true, 00:09:18.771 "claim_type": "exclusive_write", 00:09:18.771 "zoned": false, 00:09:18.771 "supported_io_types": { 00:09:18.771 "read": true, 00:09:18.771 "write": true, 00:09:18.771 "unmap": true, 00:09:18.771 "flush": true, 00:09:18.771 "reset": true, 00:09:18.771 "nvme_admin": false, 00:09:19.029 "nvme_io": false, 00:09:19.029 "nvme_io_md": false, 00:09:19.029 "write_zeroes": true, 00:09:19.029 "zcopy": true, 00:09:19.029 "get_zone_info": false, 00:09:19.029 "zone_management": false, 00:09:19.029 "zone_append": false, 00:09:19.029 "compare": false, 00:09:19.029 "compare_and_write": false, 00:09:19.029 "abort": true, 00:09:19.029 "seek_hole": false, 00:09:19.029 "seek_data": false, 00:09:19.029 "copy": true, 00:09:19.029 "nvme_iov_md": false 00:09:19.029 }, 00:09:19.029 "memory_domains": [ 00:09:19.029 { 00:09:19.029 "dma_device_id": "system", 00:09:19.029 "dma_device_type": 1 00:09:19.029 }, 00:09:19.029 { 00:09:19.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.030 "dma_device_type": 2 00:09:19.030 } 00:09:19.030 ], 00:09:19.030 "driver_specific": {} 00:09:19.030 } 00:09:19.030 ] 00:09:19.030 21:36:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.030 21:36:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:19.030 21:36:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:19.030 21:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:19.030 21:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:09:19.030 21:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.030 21:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:19.030 21:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.030 21:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.030 21:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:19.030 21:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.030 21:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.030 21:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.030 21:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.030 21:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.030 21:36:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.030 21:36:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.030 21:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.030 21:36:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.030 21:36:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.030 "name": "Existed_Raid", 00:09:19.030 "uuid": "018eb140-a2bc-4013-9f87-a119662264fa", 00:09:19.030 "strip_size_kb": 64, 00:09:19.030 "state": "online", 00:09:19.030 "raid_level": "concat", 00:09:19.030 "superblock": true, 00:09:19.030 "num_base_bdevs": 2, 00:09:19.030 "num_base_bdevs_discovered": 2, 00:09:19.030 "num_base_bdevs_operational": 2, 00:09:19.030 "base_bdevs_list": [ 00:09:19.030 { 00:09:19.030 "name": "BaseBdev1", 00:09:19.030 "uuid": "c58cc06a-166b-48ae-bbb1-925afa8e5efb", 00:09:19.030 "is_configured": true, 00:09:19.030 "data_offset": 2048, 00:09:19.030 "data_size": 63488 00:09:19.030 }, 00:09:19.030 { 00:09:19.030 "name": "BaseBdev2", 00:09:19.030 "uuid": "11ef8fd4-ea0c-4b87-ada4-6b9414cbf015", 00:09:19.030 "is_configured": true, 00:09:19.030 "data_offset": 2048, 00:09:19.030 "data_size": 63488 00:09:19.030 } 00:09:19.030 ] 00:09:19.030 }' 00:09:19.030 21:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.030 21:36:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.289 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:19.289 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:19.289 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:19.289 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:19.289 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:19.289 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:19.289 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:09:19.289 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:19.289 21:36:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.289 21:36:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.289 [2024-12-10 21:36:20.037402] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:19.289 21:36:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.547 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:19.547 "name": "Existed_Raid", 00:09:19.547 "aliases": [ 00:09:19.547 "018eb140-a2bc-4013-9f87-a119662264fa" 00:09:19.547 ], 00:09:19.547 "product_name": "Raid Volume", 00:09:19.547 "block_size": 512, 00:09:19.547 "num_blocks": 126976, 00:09:19.547 "uuid": "018eb140-a2bc-4013-9f87-a119662264fa", 00:09:19.547 "assigned_rate_limits": { 00:09:19.547 "rw_ios_per_sec": 0, 00:09:19.547 "rw_mbytes_per_sec": 0, 00:09:19.547 "r_mbytes_per_sec": 0, 00:09:19.547 "w_mbytes_per_sec": 0 00:09:19.547 }, 00:09:19.547 "claimed": false, 00:09:19.547 "zoned": false, 00:09:19.547 "supported_io_types": { 00:09:19.547 "read": true, 00:09:19.547 "write": true, 00:09:19.547 "unmap": true, 00:09:19.547 "flush": true, 00:09:19.547 "reset": true, 00:09:19.547 "nvme_admin": false, 00:09:19.547 "nvme_io": false, 00:09:19.547 "nvme_io_md": false, 00:09:19.547 "write_zeroes": true, 00:09:19.547 "zcopy": false, 00:09:19.547 "get_zone_info": false, 00:09:19.547 "zone_management": false, 00:09:19.547 "zone_append": false, 00:09:19.547 "compare": false, 00:09:19.547 "compare_and_write": false, 00:09:19.547 "abort": false, 00:09:19.547 "seek_hole": false, 00:09:19.547 "seek_data": false, 00:09:19.547 "copy": false, 00:09:19.547 "nvme_iov_md": false 00:09:19.547 }, 00:09:19.547 "memory_domains": [ 00:09:19.547 { 00:09:19.547 
"dma_device_id": "system", 00:09:19.547 "dma_device_type": 1 00:09:19.547 }, 00:09:19.547 { 00:09:19.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.547 "dma_device_type": 2 00:09:19.547 }, 00:09:19.547 { 00:09:19.547 "dma_device_id": "system", 00:09:19.547 "dma_device_type": 1 00:09:19.547 }, 00:09:19.547 { 00:09:19.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.547 "dma_device_type": 2 00:09:19.547 } 00:09:19.547 ], 00:09:19.547 "driver_specific": { 00:09:19.547 "raid": { 00:09:19.547 "uuid": "018eb140-a2bc-4013-9f87-a119662264fa", 00:09:19.548 "strip_size_kb": 64, 00:09:19.548 "state": "online", 00:09:19.548 "raid_level": "concat", 00:09:19.548 "superblock": true, 00:09:19.548 "num_base_bdevs": 2, 00:09:19.548 "num_base_bdevs_discovered": 2, 00:09:19.548 "num_base_bdevs_operational": 2, 00:09:19.548 "base_bdevs_list": [ 00:09:19.548 { 00:09:19.548 "name": "BaseBdev1", 00:09:19.548 "uuid": "c58cc06a-166b-48ae-bbb1-925afa8e5efb", 00:09:19.548 "is_configured": true, 00:09:19.548 "data_offset": 2048, 00:09:19.548 "data_size": 63488 00:09:19.548 }, 00:09:19.548 { 00:09:19.548 "name": "BaseBdev2", 00:09:19.548 "uuid": "11ef8fd4-ea0c-4b87-ada4-6b9414cbf015", 00:09:19.548 "is_configured": true, 00:09:19.548 "data_offset": 2048, 00:09:19.548 "data_size": 63488 00:09:19.548 } 00:09:19.548 ] 00:09:19.548 } 00:09:19.548 } 00:09:19.548 }' 00:09:19.548 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:19.548 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:19.548 BaseBdev2' 00:09:19.548 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.548 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:19.548 21:36:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.548 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:19.548 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.548 21:36:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.548 21:36:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.548 21:36:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.548 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.548 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.548 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.548 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:19.548 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.548 21:36:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.548 21:36:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.548 21:36:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.548 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.548 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.548 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:09:19.548 21:36:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.548 21:36:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.548 [2024-12-10 21:36:20.264795] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:19.548 [2024-12-10 21:36:20.264849] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:19.548 [2024-12-10 21:36:20.264908] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:19.807 21:36:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.807 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:19.807 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:19.807 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:19.807 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:19.807 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:19.807 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:09:19.807 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.807 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:19.807 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.807 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.807 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:09:19.807 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.807 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.807 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.807 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.807 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.807 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.807 21:36:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.807 21:36:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.807 21:36:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.807 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.807 "name": "Existed_Raid", 00:09:19.807 "uuid": "018eb140-a2bc-4013-9f87-a119662264fa", 00:09:19.807 "strip_size_kb": 64, 00:09:19.807 "state": "offline", 00:09:19.807 "raid_level": "concat", 00:09:19.807 "superblock": true, 00:09:19.807 "num_base_bdevs": 2, 00:09:19.807 "num_base_bdevs_discovered": 1, 00:09:19.807 "num_base_bdevs_operational": 1, 00:09:19.807 "base_bdevs_list": [ 00:09:19.807 { 00:09:19.807 "name": null, 00:09:19.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.807 "is_configured": false, 00:09:19.807 "data_offset": 0, 00:09:19.807 "data_size": 63488 00:09:19.807 }, 00:09:19.807 { 00:09:19.807 "name": "BaseBdev2", 00:09:19.807 "uuid": "11ef8fd4-ea0c-4b87-ada4-6b9414cbf015", 00:09:19.807 "is_configured": true, 00:09:19.807 "data_offset": 2048, 00:09:19.807 "data_size": 63488 00:09:19.807 } 00:09:19.807 ] 
00:09:19.807 }' 00:09:19.807 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.807 21:36:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.065 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:20.065 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:20.065 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.065 21:36:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.324 21:36:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.324 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:20.324 21:36:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.324 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:20.324 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:20.324 21:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:20.324 21:36:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.324 21:36:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.324 [2024-12-10 21:36:20.899613] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:20.324 [2024-12-10 21:36:20.899705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:20.324 21:36:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.324 21:36:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:20.324 21:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:20.324 21:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.324 21:36:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.324 21:36:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.324 21:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:20.324 21:36:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.324 21:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:20.324 21:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:20.324 21:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:20.324 21:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62039 00:09:20.324 21:36:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62039 ']' 00:09:20.324 21:36:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62039 00:09:20.324 21:36:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:20.324 21:36:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:20.324 21:36:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62039 00:09:20.324 21:36:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:20.324 21:36:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:09:20.324 21:36:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62039' 00:09:20.324 killing process with pid 62039 00:09:20.324 21:36:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62039 00:09:20.324 [2024-12-10 21:36:21.094448] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:20.324 21:36:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62039 00:09:20.583 [2024-12-10 21:36:21.111826] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:21.968 21:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:21.968 00:09:21.968 real 0m5.351s 00:09:21.968 user 0m7.734s 00:09:21.968 sys 0m0.875s 00:09:21.968 21:36:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.968 21:36:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.968 ************************************ 00:09:21.968 END TEST raid_state_function_test_sb 00:09:21.968 ************************************ 00:09:21.968 21:36:22 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:09:21.968 21:36:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:21.968 21:36:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.968 21:36:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:21.968 ************************************ 00:09:21.968 START TEST raid_superblock_test 00:09:21.968 ************************************ 00:09:21.968 21:36:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:09:21.968 21:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:21.968 21:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 
-- # local num_base_bdevs=2 00:09:21.968 21:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:21.968 21:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:21.968 21:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:21.968 21:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:21.968 21:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:21.968 21:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:21.968 21:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:21.968 21:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:21.968 21:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:21.968 21:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:21.968 21:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:21.968 21:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:21.968 21:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:21.968 21:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:21.968 21:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62297 00:09:21.968 21:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:21.968 21:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62297 00:09:21.968 21:36:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62297 ']' 00:09:21.968 21:36:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.968 21:36:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:21.968 21:36:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.968 21:36:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:21.968 21:36:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.968 [2024-12-10 21:36:22.479746] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:09:21.968 [2024-12-10 21:36:22.479981] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62297 ] 00:09:21.968 [2024-12-10 21:36:22.652374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.228 [2024-12-10 21:36:22.769793] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.228 [2024-12-10 21:36:22.988995] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.228 [2024-12-10 21:36:22.989171] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:22.797 
21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.797 malloc1 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.797 [2024-12-10 21:36:23.409012] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:22.797 [2024-12-10 21:36:23.409082] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.797 [2024-12-10 21:36:23.409110] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:22.797 [2024-12-10 21:36:23.409121] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:09:22.797 [2024-12-10 21:36:23.411569] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.797 [2024-12-10 21:36:23.411625] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:22.797 pt1 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.797 malloc2 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.797 [2024-12-10 21:36:23.468469] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:22.797 [2024-12-10 21:36:23.468551] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.797 [2024-12-10 21:36:23.468582] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:22.797 [2024-12-10 21:36:23.468592] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.797 [2024-12-10 21:36:23.470929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.797 [2024-12-10 21:36:23.470975] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:22.797 pt2 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.797 [2024-12-10 21:36:23.476529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:22.797 [2024-12-10 21:36:23.478428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:22.797 [2024-12-10 21:36:23.478608] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:22.797 [2024-12-10 21:36:23.478622] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:09:22.797 [2024-12-10 21:36:23.478904] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:22.797 [2024-12-10 21:36:23.479055] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:22.797 [2024-12-10 21:36:23.479066] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:22.797 [2024-12-10 21:36:23.479241] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.797 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.798 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:22.798 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.798 21:36:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.798 21:36:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.798 21:36:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.798 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.798 "name": "raid_bdev1", 00:09:22.798 "uuid": "f2d032db-9860-4efd-a2fa-e9d72e64c1b2", 00:09:22.798 "strip_size_kb": 64, 00:09:22.798 "state": "online", 00:09:22.798 "raid_level": "concat", 00:09:22.798 "superblock": true, 00:09:22.798 "num_base_bdevs": 2, 00:09:22.798 "num_base_bdevs_discovered": 2, 00:09:22.798 "num_base_bdevs_operational": 2, 00:09:22.798 "base_bdevs_list": [ 00:09:22.798 { 00:09:22.798 "name": "pt1", 00:09:22.798 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:22.798 "is_configured": true, 00:09:22.798 "data_offset": 2048, 00:09:22.798 "data_size": 63488 00:09:22.798 }, 00:09:22.798 { 00:09:22.798 "name": "pt2", 00:09:22.798 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:22.798 "is_configured": true, 00:09:22.798 "data_offset": 2048, 00:09:22.798 "data_size": 63488 00:09:22.798 } 00:09:22.798 ] 00:09:22.798 }' 00:09:22.798 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.798 21:36:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.367 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:23.367 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:23.367 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:23.367 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:23.367 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:23.367 
21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:23.367 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:23.367 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:23.367 21:36:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.367 21:36:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.367 [2024-12-10 21:36:23.932091] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:23.367 21:36:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.367 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:23.367 "name": "raid_bdev1", 00:09:23.367 "aliases": [ 00:09:23.367 "f2d032db-9860-4efd-a2fa-e9d72e64c1b2" 00:09:23.367 ], 00:09:23.367 "product_name": "Raid Volume", 00:09:23.367 "block_size": 512, 00:09:23.367 "num_blocks": 126976, 00:09:23.367 "uuid": "f2d032db-9860-4efd-a2fa-e9d72e64c1b2", 00:09:23.367 "assigned_rate_limits": { 00:09:23.367 "rw_ios_per_sec": 0, 00:09:23.367 "rw_mbytes_per_sec": 0, 00:09:23.367 "r_mbytes_per_sec": 0, 00:09:23.367 "w_mbytes_per_sec": 0 00:09:23.368 }, 00:09:23.368 "claimed": false, 00:09:23.368 "zoned": false, 00:09:23.368 "supported_io_types": { 00:09:23.368 "read": true, 00:09:23.368 "write": true, 00:09:23.368 "unmap": true, 00:09:23.368 "flush": true, 00:09:23.368 "reset": true, 00:09:23.368 "nvme_admin": false, 00:09:23.368 "nvme_io": false, 00:09:23.368 "nvme_io_md": false, 00:09:23.368 "write_zeroes": true, 00:09:23.368 "zcopy": false, 00:09:23.368 "get_zone_info": false, 00:09:23.368 "zone_management": false, 00:09:23.368 "zone_append": false, 00:09:23.368 "compare": false, 00:09:23.368 "compare_and_write": false, 00:09:23.368 "abort": false, 00:09:23.368 "seek_hole": false, 00:09:23.368 
"seek_data": false, 00:09:23.368 "copy": false, 00:09:23.368 "nvme_iov_md": false 00:09:23.368 }, 00:09:23.368 "memory_domains": [ 00:09:23.368 { 00:09:23.368 "dma_device_id": "system", 00:09:23.368 "dma_device_type": 1 00:09:23.368 }, 00:09:23.368 { 00:09:23.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.368 "dma_device_type": 2 00:09:23.368 }, 00:09:23.368 { 00:09:23.368 "dma_device_id": "system", 00:09:23.368 "dma_device_type": 1 00:09:23.368 }, 00:09:23.368 { 00:09:23.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.368 "dma_device_type": 2 00:09:23.368 } 00:09:23.368 ], 00:09:23.368 "driver_specific": { 00:09:23.368 "raid": { 00:09:23.368 "uuid": "f2d032db-9860-4efd-a2fa-e9d72e64c1b2", 00:09:23.368 "strip_size_kb": 64, 00:09:23.368 "state": "online", 00:09:23.368 "raid_level": "concat", 00:09:23.368 "superblock": true, 00:09:23.368 "num_base_bdevs": 2, 00:09:23.368 "num_base_bdevs_discovered": 2, 00:09:23.368 "num_base_bdevs_operational": 2, 00:09:23.368 "base_bdevs_list": [ 00:09:23.368 { 00:09:23.368 "name": "pt1", 00:09:23.368 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:23.368 "is_configured": true, 00:09:23.368 "data_offset": 2048, 00:09:23.368 "data_size": 63488 00:09:23.368 }, 00:09:23.368 { 00:09:23.368 "name": "pt2", 00:09:23.368 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:23.368 "is_configured": true, 00:09:23.368 "data_offset": 2048, 00:09:23.368 "data_size": 63488 00:09:23.368 } 00:09:23.368 ] 00:09:23.368 } 00:09:23.368 } 00:09:23.368 }' 00:09:23.368 21:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:23.368 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:23.368 pt2' 00:09:23.368 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.368 21:36:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:23.368 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:23.368 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.368 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:23.368 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.368 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.368 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.368 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:23.368 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:23.368 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:23.368 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.368 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:23.368 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.368 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.368 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.368 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:23.368 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:23.368 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:09:23.368 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.368 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:23.368 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.368 [2024-12-10 21:36:24.139700] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f2d032db-9860-4efd-a2fa-e9d72e64c1b2 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f2d032db-9860-4efd-a2fa-e9d72e64c1b2 ']' 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.629 [2024-12-10 21:36:24.175335] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:23.629 [2024-12-10 21:36:24.175366] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:23.629 [2024-12-10 21:36:24.175484] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:23.629 [2024-12-10 21:36:24.175534] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:23.629 [2024-12-10 21:36:24.175548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r 
'.[]' 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.629 [2024-12-10 21:36:24.303203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:23.629 [2024-12-10 21:36:24.305413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:23.629 [2024-12-10 21:36:24.305522] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:23.629 [2024-12-10 21:36:24.305585] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:23.629 [2024-12-10 21:36:24.305602] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:23.629 [2024-12-10 21:36:24.305614] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:23.629 request: 00:09:23.629 { 00:09:23.629 "name": "raid_bdev1", 00:09:23.629 "raid_level": "concat", 00:09:23.629 "base_bdevs": [ 00:09:23.629 "malloc1", 00:09:23.629 "malloc2" 00:09:23.629 ], 00:09:23.629 "strip_size_kb": 64, 00:09:23.629 "superblock": false, 00:09:23.629 "method": "bdev_raid_create", 00:09:23.629 "req_id": 1 00:09:23.629 } 00:09:23.629 Got JSON-RPC error response 00:09:23.629 response: 00:09:23.629 { 00:09:23.629 "code": -17, 00:09:23.629 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:23.629 } 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.629 
21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.629 [2024-12-10 21:36:24.351075] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:23.629 [2024-12-10 21:36:24.351203] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.629 [2024-12-10 21:36:24.351257] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:23.629 [2024-12-10 21:36:24.351301] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.629 [2024-12-10 21:36:24.353905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.629 [2024-12-10 21:36:24.353999] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:23.629 [2024-12-10 21:36:24.354133] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:23.629 [2024-12-10 21:36:24.354234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:23.629 pt1 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:23.629 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.630 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:23.630 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.630 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.630 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.630 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.630 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:23.630 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.630 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.630 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.630 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.630 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.630 "name": "raid_bdev1", 00:09:23.630 "uuid": "f2d032db-9860-4efd-a2fa-e9d72e64c1b2", 00:09:23.630 "strip_size_kb": 64, 00:09:23.630 "state": "configuring", 00:09:23.630 "raid_level": "concat", 00:09:23.630 "superblock": true, 00:09:23.630 "num_base_bdevs": 2, 00:09:23.630 "num_base_bdevs_discovered": 1, 00:09:23.630 "num_base_bdevs_operational": 2, 00:09:23.630 "base_bdevs_list": [ 00:09:23.630 { 00:09:23.630 "name": "pt1", 00:09:23.630 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:09:23.630 "is_configured": true, 00:09:23.630 "data_offset": 2048, 00:09:23.630 "data_size": 63488 00:09:23.630 }, 00:09:23.630 { 00:09:23.630 "name": null, 00:09:23.630 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:23.630 "is_configured": false, 00:09:23.630 "data_offset": 2048, 00:09:23.630 "data_size": 63488 00:09:23.630 } 00:09:23.630 ] 00:09:23.630 }' 00:09:23.630 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.630 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.199 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:24.199 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:24.199 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:24.199 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:24.199 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.199 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.199 [2024-12-10 21:36:24.762372] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:24.199 [2024-12-10 21:36:24.762509] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.199 [2024-12-10 21:36:24.762556] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:24.199 [2024-12-10 21:36:24.762620] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.199 [2024-12-10 21:36:24.763137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.199 [2024-12-10 21:36:24.763212] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:09:24.199 [2024-12-10 21:36:24.763338] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:24.199 [2024-12-10 21:36:24.763400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:24.199 [2024-12-10 21:36:24.763576] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:24.199 [2024-12-10 21:36:24.763636] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:24.199 [2024-12-10 21:36:24.763923] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:24.199 [2024-12-10 21:36:24.764113] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:24.199 [2024-12-10 21:36:24.764155] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:24.199 [2024-12-10 21:36:24.764362] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.199 pt2 00:09:24.199 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.199 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:24.199 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:24.199 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:24.199 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:24.199 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:24.199 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:24.199 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.199 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:09:24.199 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.199 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.199 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.199 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.199 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.199 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.199 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.199 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:24.199 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.199 21:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.199 "name": "raid_bdev1", 00:09:24.199 "uuid": "f2d032db-9860-4efd-a2fa-e9d72e64c1b2", 00:09:24.199 "strip_size_kb": 64, 00:09:24.199 "state": "online", 00:09:24.199 "raid_level": "concat", 00:09:24.199 "superblock": true, 00:09:24.199 "num_base_bdevs": 2, 00:09:24.199 "num_base_bdevs_discovered": 2, 00:09:24.199 "num_base_bdevs_operational": 2, 00:09:24.199 "base_bdevs_list": [ 00:09:24.199 { 00:09:24.199 "name": "pt1", 00:09:24.199 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:24.199 "is_configured": true, 00:09:24.199 "data_offset": 2048, 00:09:24.199 "data_size": 63488 00:09:24.199 }, 00:09:24.199 { 00:09:24.199 "name": "pt2", 00:09:24.199 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:24.199 "is_configured": true, 00:09:24.199 "data_offset": 2048, 00:09:24.199 "data_size": 63488 00:09:24.199 } 00:09:24.199 ] 00:09:24.199 }' 00:09:24.199 21:36:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.199 21:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.460 21:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:24.460 21:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:24.460 21:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:24.460 21:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:24.460 21:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:24.460 21:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:24.460 21:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:24.460 21:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:24.460 21:36:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.460 21:36:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.460 [2024-12-10 21:36:25.237788] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:24.720 21:36:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.720 21:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:24.720 "name": "raid_bdev1", 00:09:24.720 "aliases": [ 00:09:24.720 "f2d032db-9860-4efd-a2fa-e9d72e64c1b2" 00:09:24.720 ], 00:09:24.720 "product_name": "Raid Volume", 00:09:24.720 "block_size": 512, 00:09:24.720 "num_blocks": 126976, 00:09:24.720 "uuid": "f2d032db-9860-4efd-a2fa-e9d72e64c1b2", 00:09:24.720 "assigned_rate_limits": { 00:09:24.720 "rw_ios_per_sec": 0, 00:09:24.720 "rw_mbytes_per_sec": 0, 00:09:24.720 
"r_mbytes_per_sec": 0, 00:09:24.720 "w_mbytes_per_sec": 0 00:09:24.720 }, 00:09:24.720 "claimed": false, 00:09:24.720 "zoned": false, 00:09:24.720 "supported_io_types": { 00:09:24.720 "read": true, 00:09:24.720 "write": true, 00:09:24.720 "unmap": true, 00:09:24.720 "flush": true, 00:09:24.720 "reset": true, 00:09:24.720 "nvme_admin": false, 00:09:24.720 "nvme_io": false, 00:09:24.720 "nvme_io_md": false, 00:09:24.720 "write_zeroes": true, 00:09:24.720 "zcopy": false, 00:09:24.720 "get_zone_info": false, 00:09:24.720 "zone_management": false, 00:09:24.720 "zone_append": false, 00:09:24.720 "compare": false, 00:09:24.720 "compare_and_write": false, 00:09:24.720 "abort": false, 00:09:24.720 "seek_hole": false, 00:09:24.720 "seek_data": false, 00:09:24.720 "copy": false, 00:09:24.720 "nvme_iov_md": false 00:09:24.720 }, 00:09:24.720 "memory_domains": [ 00:09:24.720 { 00:09:24.720 "dma_device_id": "system", 00:09:24.720 "dma_device_type": 1 00:09:24.720 }, 00:09:24.720 { 00:09:24.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.720 "dma_device_type": 2 00:09:24.720 }, 00:09:24.720 { 00:09:24.720 "dma_device_id": "system", 00:09:24.720 "dma_device_type": 1 00:09:24.720 }, 00:09:24.720 { 00:09:24.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.720 "dma_device_type": 2 00:09:24.720 } 00:09:24.720 ], 00:09:24.720 "driver_specific": { 00:09:24.720 "raid": { 00:09:24.720 "uuid": "f2d032db-9860-4efd-a2fa-e9d72e64c1b2", 00:09:24.720 "strip_size_kb": 64, 00:09:24.720 "state": "online", 00:09:24.720 "raid_level": "concat", 00:09:24.720 "superblock": true, 00:09:24.720 "num_base_bdevs": 2, 00:09:24.720 "num_base_bdevs_discovered": 2, 00:09:24.720 "num_base_bdevs_operational": 2, 00:09:24.720 "base_bdevs_list": [ 00:09:24.720 { 00:09:24.720 "name": "pt1", 00:09:24.720 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:24.720 "is_configured": true, 00:09:24.720 "data_offset": 2048, 00:09:24.720 "data_size": 63488 00:09:24.720 }, 00:09:24.720 { 00:09:24.720 "name": 
"pt2", 00:09:24.720 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:24.720 "is_configured": true, 00:09:24.720 "data_offset": 2048, 00:09:24.720 "data_size": 63488 00:09:24.720 } 00:09:24.720 ] 00:09:24.720 } 00:09:24.720 } 00:09:24.720 }' 00:09:24.720 21:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:24.720 21:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:24.720 pt2' 00:09:24.720 21:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.720 21:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:24.720 21:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.720 21:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:24.720 21:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.720 21:36:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.720 21:36:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.720 21:36:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.720 21:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.720 21:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.720 21:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.720 21:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:24.720 21:36:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.720 21:36:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.720 21:36:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.720 21:36:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.720 21:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.720 21:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.720 21:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:24.720 21:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:24.720 21:36:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.720 21:36:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.720 [2024-12-10 21:36:25.461442] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:24.720 21:36:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.981 21:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f2d032db-9860-4efd-a2fa-e9d72e64c1b2 '!=' f2d032db-9860-4efd-a2fa-e9d72e64c1b2 ']' 00:09:24.981 21:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:24.981 21:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:24.981 21:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:24.981 21:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62297 00:09:24.981 21:36:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62297 ']' 00:09:24.981 21:36:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 62297 00:09:24.981 21:36:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:24.981 21:36:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:24.981 21:36:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62297 00:09:24.981 killing process with pid 62297 00:09:24.981 21:36:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:24.981 21:36:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:24.981 21:36:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62297' 00:09:24.981 21:36:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62297 00:09:24.981 21:36:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62297 00:09:24.981 [2024-12-10 21:36:25.544595] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:24.981 [2024-12-10 21:36:25.544716] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:24.981 [2024-12-10 21:36:25.544780] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:24.981 [2024-12-10 21:36:25.544793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:25.240 [2024-12-10 21:36:25.766850] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:26.178 21:36:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:26.178 00:09:26.178 real 0m4.541s 00:09:26.178 user 0m6.316s 00:09:26.178 sys 0m0.772s 00:09:26.178 21:36:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.178 ************************************ 00:09:26.178 END TEST 
raid_superblock_test 00:09:26.178 ************************************ 00:09:26.178 21:36:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.438 21:36:26 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:09:26.438 21:36:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:26.438 21:36:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.438 21:36:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:26.438 ************************************ 00:09:26.438 START TEST raid_read_error_test 00:09:26.438 ************************************ 00:09:26.438 21:36:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:09:26.438 21:36:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:26.438 21:36:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:26.438 21:36:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:26.438 21:36:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:26.438 21:36:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:26.438 21:36:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:26.438 21:36:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:26.438 21:36:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:26.438 21:36:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:26.438 21:36:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:26.438 21:36:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:26.438 21:36:27 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:26.438 21:36:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:26.438 21:36:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:26.438 21:36:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:26.438 21:36:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:26.438 21:36:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:26.438 21:36:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:26.438 21:36:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:26.438 21:36:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:26.438 21:36:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:26.438 21:36:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:26.438 21:36:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.UFpg0VP5pU 00:09:26.438 21:36:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62504 00:09:26.438 21:36:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:26.438 21:36:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62504 00:09:26.438 21:36:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62504 ']' 00:09:26.438 21:36:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.438 21:36:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:26.438 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.438 21:36:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.438 21:36:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:26.438 21:36:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.438 [2024-12-10 21:36:27.108631] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:09:26.438 [2024-12-10 21:36:27.108758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62504 ] 00:09:26.742 [2024-12-10 21:36:27.289259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.742 [2024-12-10 21:36:27.412993] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.001 [2024-12-10 21:36:27.632168] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:27.001 [2024-12-10 21:36:27.632238] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:27.260 21:36:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:27.260 21:36:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:27.260 21:36:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:27.260 21:36:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:27.260 21:36:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.260 21:36:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.260 
BaseBdev1_malloc 00:09:27.260 21:36:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.260 21:36:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:27.260 21:36:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.260 21:36:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.519 true 00:09:27.519 21:36:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.519 21:36:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:27.519 21:36:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.519 21:36:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.519 [2024-12-10 21:36:28.052905] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:27.519 [2024-12-10 21:36:28.052975] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:27.519 [2024-12-10 21:36:28.052997] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:27.519 [2024-12-10 21:36:28.053008] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:27.519 [2024-12-10 21:36:28.055494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:27.519 [2024-12-10 21:36:28.055542] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:27.519 BaseBdev1 00:09:27.519 21:36:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.519 21:36:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:27.519 21:36:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:27.519 21:36:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.519 21:36:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.519 BaseBdev2_malloc 00:09:27.519 21:36:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.519 21:36:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:27.519 21:36:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.519 21:36:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.519 true 00:09:27.519 21:36:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.519 21:36:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:27.519 21:36:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.519 21:36:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.519 [2024-12-10 21:36:28.116787] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:27.519 [2024-12-10 21:36:28.116855] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:27.519 [2024-12-10 21:36:28.116874] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:27.519 [2024-12-10 21:36:28.116885] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:27.519 [2024-12-10 21:36:28.119224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:27.520 [2024-12-10 21:36:28.119268] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:27.520 BaseBdev2 00:09:27.520 21:36:28 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.520 21:36:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:27.520 21:36:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.520 21:36:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.520 [2024-12-10 21:36:28.128835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:27.520 [2024-12-10 21:36:28.130830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:27.520 [2024-12-10 21:36:28.131064] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:27.520 [2024-12-10 21:36:28.131080] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:27.520 [2024-12-10 21:36:28.131321] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:27.520 [2024-12-10 21:36:28.131518] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:27.520 [2024-12-10 21:36:28.131532] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:27.520 [2024-12-10 21:36:28.131745] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:27.520 21:36:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.520 21:36:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:27.520 21:36:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:27.520 21:36:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:27.520 21:36:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:27.520 21:36:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.520 21:36:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:27.520 21:36:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.520 21:36:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.520 21:36:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.520 21:36:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.520 21:36:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.520 21:36:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:27.520 21:36:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.520 21:36:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.520 21:36:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.520 21:36:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.520 "name": "raid_bdev1", 00:09:27.520 "uuid": "80115dd4-b586-40eb-bb8b-1eb6c9f2964e", 00:09:27.520 "strip_size_kb": 64, 00:09:27.520 "state": "online", 00:09:27.520 "raid_level": "concat", 00:09:27.520 "superblock": true, 00:09:27.520 "num_base_bdevs": 2, 00:09:27.520 "num_base_bdevs_discovered": 2, 00:09:27.520 "num_base_bdevs_operational": 2, 00:09:27.520 "base_bdevs_list": [ 00:09:27.520 { 00:09:27.520 "name": "BaseBdev1", 00:09:27.520 "uuid": "c5d4de7a-2fe5-5252-bfce-4a0a7c49bb6b", 00:09:27.520 "is_configured": true, 00:09:27.520 "data_offset": 2048, 00:09:27.520 "data_size": 63488 00:09:27.520 }, 
00:09:27.520 { 00:09:27.520 "name": "BaseBdev2", 00:09:27.520 "uuid": "54b9fe76-5ddd-5ac7-860a-39d190a5042f", 00:09:27.520 "is_configured": true, 00:09:27.520 "data_offset": 2048, 00:09:27.520 "data_size": 63488 00:09:27.520 } 00:09:27.520 ] 00:09:27.520 }' 00:09:27.520 21:36:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.520 21:36:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.779 21:36:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:27.779 21:36:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:28.037 [2024-12-10 21:36:28.673494] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:28.977 21:36:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:28.977 21:36:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.977 21:36:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.977 21:36:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.977 21:36:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:28.977 21:36:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:28.977 21:36:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:28.977 21:36:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:28.977 21:36:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:28.977 21:36:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:28.977 21:36:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.977 21:36:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.977 21:36:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:28.977 21:36:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.977 21:36:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.977 21:36:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.977 21:36:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.977 21:36:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:28.977 21:36:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.977 21:36:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.977 21:36:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.977 21:36:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.977 21:36:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.977 "name": "raid_bdev1", 00:09:28.977 "uuid": "80115dd4-b586-40eb-bb8b-1eb6c9f2964e", 00:09:28.977 "strip_size_kb": 64, 00:09:28.977 "state": "online", 00:09:28.977 "raid_level": "concat", 00:09:28.977 "superblock": true, 00:09:28.977 "num_base_bdevs": 2, 00:09:28.977 "num_base_bdevs_discovered": 2, 00:09:28.977 "num_base_bdevs_operational": 2, 00:09:28.977 "base_bdevs_list": [ 00:09:28.977 { 00:09:28.977 "name": "BaseBdev1", 00:09:28.977 "uuid": "c5d4de7a-2fe5-5252-bfce-4a0a7c49bb6b", 00:09:28.977 "is_configured": true, 00:09:28.977 "data_offset": 2048, 00:09:28.977 "data_size": 63488 00:09:28.977 }, 
00:09:28.977 { 00:09:28.977 "name": "BaseBdev2", 00:09:28.977 "uuid": "54b9fe76-5ddd-5ac7-860a-39d190a5042f", 00:09:28.977 "is_configured": true, 00:09:28.977 "data_offset": 2048, 00:09:28.977 "data_size": 63488 00:09:28.977 } 00:09:28.977 ] 00:09:28.977 }' 00:09:28.977 21:36:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.977 21:36:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.236 21:36:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:29.495 21:36:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.495 21:36:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.495 [2024-12-10 21:36:30.022123] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:29.495 [2024-12-10 21:36:30.022208] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:29.495 [2024-12-10 21:36:30.025308] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:29.495 [2024-12-10 21:36:30.025398] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:29.495 [2024-12-10 21:36:30.025483] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:29.495 [2024-12-10 21:36:30.025557] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:29.495 { 00:09:29.495 "results": [ 00:09:29.495 { 00:09:29.495 "job": "raid_bdev1", 00:09:29.495 "core_mask": "0x1", 00:09:29.495 "workload": "randrw", 00:09:29.495 "percentage": 50, 00:09:29.495 "status": "finished", 00:09:29.495 "queue_depth": 1, 00:09:29.495 "io_size": 131072, 00:09:29.495 "runtime": 1.349264, 00:09:29.495 "iops": 14181.805784486949, 00:09:29.495 "mibps": 1772.7257230608686, 00:09:29.495 "io_failed": 1, 
00:09:29.495 "io_timeout": 0, 00:09:29.495 "avg_latency_us": 97.35915058930058, 00:09:29.495 "min_latency_us": 27.94759825327511, 00:09:29.495 "max_latency_us": 1438.071615720524 00:09:29.495 } 00:09:29.496 ], 00:09:29.496 "core_count": 1 00:09:29.496 } 00:09:29.496 21:36:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.496 21:36:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62504 00:09:29.496 21:36:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62504 ']' 00:09:29.496 21:36:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62504 00:09:29.496 21:36:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:29.496 21:36:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:29.496 21:36:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62504 00:09:29.496 killing process with pid 62504 00:09:29.496 21:36:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:29.496 21:36:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:29.496 21:36:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62504' 00:09:29.496 21:36:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62504 00:09:29.496 21:36:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62504 00:09:29.496 [2024-12-10 21:36:30.070013] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:29.496 [2024-12-10 21:36:30.214728] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:30.877 21:36:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.UFpg0VP5pU 00:09:30.877 21:36:31 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:30.878 21:36:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:30.878 21:36:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:09:30.878 21:36:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:30.878 21:36:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:30.878 21:36:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:30.878 21:36:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:09:30.878 00:09:30.878 real 0m4.486s 00:09:30.878 user 0m5.379s 00:09:30.878 sys 0m0.534s 00:09:30.878 ************************************ 00:09:30.878 END TEST raid_read_error_test 00:09:30.878 ************************************ 00:09:30.878 21:36:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.878 21:36:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.878 21:36:31 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:09:30.878 21:36:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:30.878 21:36:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.878 21:36:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:30.878 ************************************ 00:09:30.878 START TEST raid_write_error_test 00:09:30.878 ************************************ 00:09:30.878 21:36:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:09:30.878 21:36:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:30.878 21:36:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:30.878 21:36:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:30.878 21:36:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:30.878 21:36:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:30.878 21:36:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:30.878 21:36:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:30.878 21:36:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:30.878 21:36:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:30.878 21:36:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:30.878 21:36:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:30.878 21:36:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:30.878 21:36:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:30.878 21:36:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:30.878 21:36:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:30.878 21:36:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:30.878 21:36:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:30.878 21:36:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:30.878 21:36:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:30.878 21:36:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:30.878 21:36:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:30.878 21:36:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:30.878 21:36:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.iX8jZQqKto 00:09:30.878 21:36:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62650 00:09:30.878 21:36:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62650 00:09:30.878 21:36:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:30.878 21:36:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62650 ']' 00:09:30.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.878 21:36:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.878 21:36:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.878 21:36:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.878 21:36:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.878 21:36:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.138 [2024-12-10 21:36:31.668715] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:09:31.138 [2024-12-10 21:36:31.668947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62650 ] 00:09:31.138 [2024-12-10 21:36:31.842611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.397 [2024-12-10 21:36:31.981971] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.656 [2024-12-10 21:36:32.193965] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:31.656 [2024-12-10 21:36:32.194167] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:31.915 21:36:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.915 21:36:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:31.915 21:36:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:31.915 21:36:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:31.915 21:36:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.915 21:36:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.915 BaseBdev1_malloc 00:09:31.915 21:36:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.915 21:36:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:31.915 21:36:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.915 21:36:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.915 true 00:09:31.915 21:36:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:31.915 21:36:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:31.915 21:36:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.915 21:36:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.915 [2024-12-10 21:36:32.641266] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:31.915 [2024-12-10 21:36:32.641336] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.915 [2024-12-10 21:36:32.641361] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:31.915 [2024-12-10 21:36:32.641373] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.915 [2024-12-10 21:36:32.643820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.915 [2024-12-10 21:36:32.643867] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:31.915 BaseBdev1 00:09:31.915 21:36:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.915 21:36:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:31.915 21:36:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:31.915 21:36:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.915 21:36:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.173 BaseBdev2_malloc 00:09:32.173 21:36:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.173 21:36:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:32.173 21:36:32 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.173 21:36:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.173 true 00:09:32.173 21:36:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.173 21:36:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:32.173 21:36:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.173 21:36:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.173 [2024-12-10 21:36:32.714617] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:32.173 [2024-12-10 21:36:32.714687] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:32.174 [2024-12-10 21:36:32.714710] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:32.174 [2024-12-10 21:36:32.714722] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:32.174 [2024-12-10 21:36:32.717254] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:32.174 [2024-12-10 21:36:32.717395] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:32.174 BaseBdev2 00:09:32.174 21:36:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.174 21:36:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:32.174 21:36:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.174 21:36:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.174 [2024-12-10 21:36:32.726675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:32.174 [2024-12-10 21:36:32.728894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:32.174 [2024-12-10 21:36:32.729239] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:32.174 [2024-12-10 21:36:32.729306] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:32.174 [2024-12-10 21:36:32.729645] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:32.174 [2024-12-10 21:36:32.729885] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:32.174 [2024-12-10 21:36:32.729936] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:32.174 [2024-12-10 21:36:32.730180] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:32.174 21:36:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.174 21:36:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:32.174 21:36:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:32.174 21:36:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:32.174 21:36:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:32.174 21:36:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.174 21:36:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:32.174 21:36:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.174 21:36:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.174 21:36:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.174 21:36:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.174 21:36:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.174 21:36:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:32.174 21:36:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.174 21:36:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.174 21:36:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.174 21:36:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.174 "name": "raid_bdev1", 00:09:32.174 "uuid": "9b7b4397-f117-403d-98e5-833a06c55529", 00:09:32.174 "strip_size_kb": 64, 00:09:32.174 "state": "online", 00:09:32.174 "raid_level": "concat", 00:09:32.174 "superblock": true, 00:09:32.174 "num_base_bdevs": 2, 00:09:32.174 "num_base_bdevs_discovered": 2, 00:09:32.174 "num_base_bdevs_operational": 2, 00:09:32.174 "base_bdevs_list": [ 00:09:32.174 { 00:09:32.174 "name": "BaseBdev1", 00:09:32.174 "uuid": "a67c7835-51bd-555a-ade6-e05d4f05d51b", 00:09:32.174 "is_configured": true, 00:09:32.174 "data_offset": 2048, 00:09:32.174 "data_size": 63488 00:09:32.174 }, 00:09:32.174 { 00:09:32.174 "name": "BaseBdev2", 00:09:32.174 "uuid": "694c1f2d-8b63-5ba3-9153-a4d4739f366c", 00:09:32.174 "is_configured": true, 00:09:32.174 "data_offset": 2048, 00:09:32.174 "data_size": 63488 00:09:32.174 } 00:09:32.174 ] 00:09:32.174 }' 00:09:32.174 21:36:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.174 21:36:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.433 21:36:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:09:32.433 21:36:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:32.692 [2024-12-10 21:36:33.266951] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:33.630 21:36:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:33.630 21:36:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.630 21:36:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.630 21:36:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.630 21:36:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:33.630 21:36:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:33.630 21:36:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:33.630 21:36:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:33.630 21:36:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:33.630 21:36:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:33.630 21:36:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.630 21:36:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.630 21:36:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:33.630 21:36:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.630 21:36:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:09:33.630 21:36:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.630 21:36:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.630 21:36:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.630 21:36:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.630 21:36:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:33.630 21:36:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.630 21:36:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.630 21:36:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.630 "name": "raid_bdev1", 00:09:33.630 "uuid": "9b7b4397-f117-403d-98e5-833a06c55529", 00:09:33.630 "strip_size_kb": 64, 00:09:33.630 "state": "online", 00:09:33.630 "raid_level": "concat", 00:09:33.630 "superblock": true, 00:09:33.630 "num_base_bdevs": 2, 00:09:33.630 "num_base_bdevs_discovered": 2, 00:09:33.630 "num_base_bdevs_operational": 2, 00:09:33.630 "base_bdevs_list": [ 00:09:33.630 { 00:09:33.630 "name": "BaseBdev1", 00:09:33.630 "uuid": "a67c7835-51bd-555a-ade6-e05d4f05d51b", 00:09:33.630 "is_configured": true, 00:09:33.630 "data_offset": 2048, 00:09:33.630 "data_size": 63488 00:09:33.630 }, 00:09:33.630 { 00:09:33.630 "name": "BaseBdev2", 00:09:33.630 "uuid": "694c1f2d-8b63-5ba3-9153-a4d4739f366c", 00:09:33.630 "is_configured": true, 00:09:33.630 "data_offset": 2048, 00:09:33.630 "data_size": 63488 00:09:33.630 } 00:09:33.630 ] 00:09:33.630 }' 00:09:33.630 21:36:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.630 21:36:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.889 21:36:34 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:33.889 21:36:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.889 21:36:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.889 [2024-12-10 21:36:34.611132] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:33.889 [2024-12-10 21:36:34.611169] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:33.889 [2024-12-10 21:36:34.614214] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:33.889 [2024-12-10 21:36:34.614260] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:33.889 [2024-12-10 21:36:34.614290] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:33.889 [2024-12-10 21:36:34.614304] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:33.889 { 00:09:33.889 "results": [ 00:09:33.889 { 00:09:33.889 "job": "raid_bdev1", 00:09:33.889 "core_mask": "0x1", 00:09:33.889 "workload": "randrw", 00:09:33.889 "percentage": 50, 00:09:33.889 "status": "finished", 00:09:33.889 "queue_depth": 1, 00:09:33.889 "io_size": 131072, 00:09:33.889 "runtime": 1.345001, 00:09:33.889 "iops": 14310.026535296256, 00:09:33.889 "mibps": 1788.753316912032, 00:09:33.889 "io_failed": 1, 00:09:33.889 "io_timeout": 0, 00:09:33.889 "avg_latency_us": 96.60838878059582, 00:09:33.889 "min_latency_us": 28.618340611353712, 00:09:33.889 "max_latency_us": 1731.4096069868995 00:09:33.889 } 00:09:33.889 ], 00:09:33.889 "core_count": 1 00:09:33.889 } 00:09:33.889 21:36:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.889 21:36:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62650 00:09:33.889 21:36:34 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62650 ']' 00:09:33.889 21:36:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62650 00:09:33.889 21:36:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:33.889 21:36:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:33.889 21:36:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62650 00:09:33.889 killing process with pid 62650 00:09:33.889 21:36:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:33.889 21:36:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:33.889 21:36:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62650' 00:09:33.889 21:36:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62650 00:09:33.890 [2024-12-10 21:36:34.658896] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:33.890 21:36:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62650 00:09:34.148 [2024-12-10 21:36:34.807880] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:35.524 21:36:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:35.524 21:36:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.iX8jZQqKto 00:09:35.524 21:36:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:35.524 21:36:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:09:35.524 21:36:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:35.524 21:36:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:35.524 21:36:36 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:35.524 21:36:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:09:35.524 00:09:35.524 real 0m4.616s 00:09:35.524 user 0m5.542s 00:09:35.524 sys 0m0.539s 00:09:35.524 21:36:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.524 21:36:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.524 ************************************ 00:09:35.524 END TEST raid_write_error_test 00:09:35.524 ************************************ 00:09:35.524 21:36:36 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:35.524 21:36:36 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:09:35.524 21:36:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:35.524 21:36:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.524 21:36:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:35.524 ************************************ 00:09:35.524 START TEST raid_state_function_test 00:09:35.524 ************************************ 00:09:35.524 21:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:09:35.524 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:35.524 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:35.524 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:35.524 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:35.524 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:35.524 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:09:35.524 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:35.524 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:35.524 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:35.525 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:35.525 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:35.525 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:35.525 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:35.525 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:35.525 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:35.525 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:35.525 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:35.525 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:35.525 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:35.525 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:35.525 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:35.525 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:35.525 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62792 00:09:35.525 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:35.525 Process raid pid: 62792 00:09:35.525 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62792' 00:09:35.525 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62792 00:09:35.525 21:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62792 ']' 00:09:35.525 21:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.525 21:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:35.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.525 21:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.525 21:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:35.525 21:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.783 [2024-12-10 21:36:36.337841] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:09:35.783 [2024-12-10 21:36:36.338035] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:35.783 [2024-12-10 21:36:36.520520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.043 [2024-12-10 21:36:36.661282] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.302 [2024-12-10 21:36:36.910493] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:36.303 [2024-12-10 21:36:36.910558] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:36.562 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:36.562 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:36.562 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:36.562 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.562 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.562 [2024-12-10 21:36:37.294499] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:36.562 [2024-12-10 21:36:37.294555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:36.562 [2024-12-10 21:36:37.294571] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:36.562 [2024-12-10 21:36:37.294585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:36.562 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.562 21:36:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:36.562 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.562 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.562 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.562 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.562 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:36.562 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.562 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.562 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.562 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.562 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.562 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.562 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.562 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.562 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.823 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.823 "name": "Existed_Raid", 00:09:36.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.823 "strip_size_kb": 0, 00:09:36.823 "state": "configuring", 00:09:36.823 
"raid_level": "raid1", 00:09:36.823 "superblock": false, 00:09:36.823 "num_base_bdevs": 2, 00:09:36.823 "num_base_bdevs_discovered": 0, 00:09:36.823 "num_base_bdevs_operational": 2, 00:09:36.823 "base_bdevs_list": [ 00:09:36.823 { 00:09:36.823 "name": "BaseBdev1", 00:09:36.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.823 "is_configured": false, 00:09:36.823 "data_offset": 0, 00:09:36.823 "data_size": 0 00:09:36.823 }, 00:09:36.823 { 00:09:36.823 "name": "BaseBdev2", 00:09:36.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.823 "is_configured": false, 00:09:36.823 "data_offset": 0, 00:09:36.823 "data_size": 0 00:09:36.823 } 00:09:36.823 ] 00:09:36.823 }' 00:09:36.823 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.823 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.082 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:37.082 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.082 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.082 [2024-12-10 21:36:37.745679] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:37.082 [2024-12-10 21:36:37.745804] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:37.082 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.082 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:37.082 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.082 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:37.082 [2024-12-10 21:36:37.753666] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:37.082 [2024-12-10 21:36:37.753722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:37.082 [2024-12-10 21:36:37.753738] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:37.082 [2024-12-10 21:36:37.753755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:37.082 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.082 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:37.082 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.082 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.082 [2024-12-10 21:36:37.800965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:37.082 BaseBdev1 00:09:37.082 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.082 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:37.083 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:37.083 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:37.083 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:37.083 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:37.083 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:37.083 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:09:37.083 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.083 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.083 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.083 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:37.083 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.083 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.083 [ 00:09:37.083 { 00:09:37.083 "name": "BaseBdev1", 00:09:37.083 "aliases": [ 00:09:37.083 "eee94fd1-68c8-4b3a-b4a0-e683ca8553bb" 00:09:37.083 ], 00:09:37.083 "product_name": "Malloc disk", 00:09:37.083 "block_size": 512, 00:09:37.083 "num_blocks": 65536, 00:09:37.083 "uuid": "eee94fd1-68c8-4b3a-b4a0-e683ca8553bb", 00:09:37.083 "assigned_rate_limits": { 00:09:37.083 "rw_ios_per_sec": 0, 00:09:37.083 "rw_mbytes_per_sec": 0, 00:09:37.083 "r_mbytes_per_sec": 0, 00:09:37.083 "w_mbytes_per_sec": 0 00:09:37.083 }, 00:09:37.083 "claimed": true, 00:09:37.083 "claim_type": "exclusive_write", 00:09:37.083 "zoned": false, 00:09:37.083 "supported_io_types": { 00:09:37.083 "read": true, 00:09:37.083 "write": true, 00:09:37.083 "unmap": true, 00:09:37.083 "flush": true, 00:09:37.083 "reset": true, 00:09:37.083 "nvme_admin": false, 00:09:37.083 "nvme_io": false, 00:09:37.083 "nvme_io_md": false, 00:09:37.083 "write_zeroes": true, 00:09:37.083 "zcopy": true, 00:09:37.083 "get_zone_info": false, 00:09:37.083 "zone_management": false, 00:09:37.083 "zone_append": false, 00:09:37.083 "compare": false, 00:09:37.083 "compare_and_write": false, 00:09:37.083 "abort": true, 00:09:37.083 "seek_hole": false, 00:09:37.083 "seek_data": false, 00:09:37.083 "copy": true, 00:09:37.083 "nvme_iov_md": 
false 00:09:37.083 }, 00:09:37.083 "memory_domains": [ 00:09:37.083 { 00:09:37.083 "dma_device_id": "system", 00:09:37.083 "dma_device_type": 1 00:09:37.083 }, 00:09:37.083 { 00:09:37.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.083 "dma_device_type": 2 00:09:37.083 } 00:09:37.083 ], 00:09:37.083 "driver_specific": {} 00:09:37.083 } 00:09:37.083 ] 00:09:37.083 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.083 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:37.083 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:37.083 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.083 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.083 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.083 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.083 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:37.083 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.083 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.083 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.083 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.083 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.083 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.083 21:36:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.083 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.083 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.342 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.342 "name": "Existed_Raid", 00:09:37.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.342 "strip_size_kb": 0, 00:09:37.342 "state": "configuring", 00:09:37.342 "raid_level": "raid1", 00:09:37.342 "superblock": false, 00:09:37.342 "num_base_bdevs": 2, 00:09:37.342 "num_base_bdevs_discovered": 1, 00:09:37.342 "num_base_bdevs_operational": 2, 00:09:37.342 "base_bdevs_list": [ 00:09:37.342 { 00:09:37.342 "name": "BaseBdev1", 00:09:37.342 "uuid": "eee94fd1-68c8-4b3a-b4a0-e683ca8553bb", 00:09:37.342 "is_configured": true, 00:09:37.342 "data_offset": 0, 00:09:37.342 "data_size": 65536 00:09:37.342 }, 00:09:37.342 { 00:09:37.342 "name": "BaseBdev2", 00:09:37.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.342 "is_configured": false, 00:09:37.342 "data_offset": 0, 00:09:37.342 "data_size": 0 00:09:37.342 } 00:09:37.342 ] 00:09:37.342 }' 00:09:37.342 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.342 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.603 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:37.603 21:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.603 21:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.603 [2024-12-10 21:36:38.268349] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:37.603 [2024-12-10 21:36:38.268416] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:37.603 21:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.603 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:37.603 21:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.603 21:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.603 [2024-12-10 21:36:38.280389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:37.603 [2024-12-10 21:36:38.282414] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:37.603 [2024-12-10 21:36:38.282514] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:37.603 21:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.603 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:37.603 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:37.603 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:37.603 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.603 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.603 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.603 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.603 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:09:37.603 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.603 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.603 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.603 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.603 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.603 21:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.603 21:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.603 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.603 21:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.603 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.603 "name": "Existed_Raid", 00:09:37.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.603 "strip_size_kb": 0, 00:09:37.603 "state": "configuring", 00:09:37.603 "raid_level": "raid1", 00:09:37.603 "superblock": false, 00:09:37.603 "num_base_bdevs": 2, 00:09:37.603 "num_base_bdevs_discovered": 1, 00:09:37.603 "num_base_bdevs_operational": 2, 00:09:37.603 "base_bdevs_list": [ 00:09:37.603 { 00:09:37.603 "name": "BaseBdev1", 00:09:37.603 "uuid": "eee94fd1-68c8-4b3a-b4a0-e683ca8553bb", 00:09:37.603 "is_configured": true, 00:09:37.603 "data_offset": 0, 00:09:37.603 "data_size": 65536 00:09:37.603 }, 00:09:37.603 { 00:09:37.603 "name": "BaseBdev2", 00:09:37.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.603 "is_configured": false, 00:09:37.603 "data_offset": 0, 00:09:37.603 "data_size": 0 00:09:37.603 } 00:09:37.603 
] 00:09:37.603 }' 00:09:37.603 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.603 21:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.173 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:38.173 21:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.173 21:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.173 [2024-12-10 21:36:38.802897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:38.173 [2024-12-10 21:36:38.803063] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:38.173 [2024-12-10 21:36:38.803079] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:38.173 [2024-12-10 21:36:38.803381] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:38.173 [2024-12-10 21:36:38.803622] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:38.173 [2024-12-10 21:36:38.803639] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:38.173 [2024-12-10 21:36:38.803984] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:38.173 BaseBdev2 00:09:38.173 21:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.173 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:38.173 21:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:38.173 21:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:38.173 21:36:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:38.173 21:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:38.173 21:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:38.173 21:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:38.173 21:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.173 21:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.173 21:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.173 21:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:38.173 21:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.173 21:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.173 [ 00:09:38.173 { 00:09:38.173 "name": "BaseBdev2", 00:09:38.173 "aliases": [ 00:09:38.173 "39210370-7280-4ab2-a45f-2b1c53c37b42" 00:09:38.173 ], 00:09:38.173 "product_name": "Malloc disk", 00:09:38.173 "block_size": 512, 00:09:38.173 "num_blocks": 65536, 00:09:38.173 "uuid": "39210370-7280-4ab2-a45f-2b1c53c37b42", 00:09:38.173 "assigned_rate_limits": { 00:09:38.173 "rw_ios_per_sec": 0, 00:09:38.173 "rw_mbytes_per_sec": 0, 00:09:38.173 "r_mbytes_per_sec": 0, 00:09:38.173 "w_mbytes_per_sec": 0 00:09:38.173 }, 00:09:38.173 "claimed": true, 00:09:38.173 "claim_type": "exclusive_write", 00:09:38.173 "zoned": false, 00:09:38.173 "supported_io_types": { 00:09:38.173 "read": true, 00:09:38.173 "write": true, 00:09:38.173 "unmap": true, 00:09:38.173 "flush": true, 00:09:38.173 "reset": true, 00:09:38.173 "nvme_admin": false, 00:09:38.173 "nvme_io": false, 00:09:38.173 "nvme_io_md": 
false, 00:09:38.173 "write_zeroes": true, 00:09:38.173 "zcopy": true, 00:09:38.173 "get_zone_info": false, 00:09:38.173 "zone_management": false, 00:09:38.173 "zone_append": false, 00:09:38.173 "compare": false, 00:09:38.173 "compare_and_write": false, 00:09:38.173 "abort": true, 00:09:38.173 "seek_hole": false, 00:09:38.173 "seek_data": false, 00:09:38.173 "copy": true, 00:09:38.173 "nvme_iov_md": false 00:09:38.173 }, 00:09:38.173 "memory_domains": [ 00:09:38.173 { 00:09:38.173 "dma_device_id": "system", 00:09:38.173 "dma_device_type": 1 00:09:38.173 }, 00:09:38.173 { 00:09:38.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.173 "dma_device_type": 2 00:09:38.173 } 00:09:38.173 ], 00:09:38.173 "driver_specific": {} 00:09:38.173 } 00:09:38.173 ] 00:09:38.173 21:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.173 21:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:38.173 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:38.173 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:38.173 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:38.173 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.173 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:38.173 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.173 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.173 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:38.173 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:38.173 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.173 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.173 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.173 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.173 21:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.173 21:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.173 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.173 21:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.173 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.173 "name": "Existed_Raid", 00:09:38.173 "uuid": "dd646963-6e0b-4d46-b935-f18f2b18ca3c", 00:09:38.173 "strip_size_kb": 0, 00:09:38.173 "state": "online", 00:09:38.173 "raid_level": "raid1", 00:09:38.173 "superblock": false, 00:09:38.173 "num_base_bdevs": 2, 00:09:38.173 "num_base_bdevs_discovered": 2, 00:09:38.173 "num_base_bdevs_operational": 2, 00:09:38.173 "base_bdevs_list": [ 00:09:38.173 { 00:09:38.173 "name": "BaseBdev1", 00:09:38.173 "uuid": "eee94fd1-68c8-4b3a-b4a0-e683ca8553bb", 00:09:38.173 "is_configured": true, 00:09:38.173 "data_offset": 0, 00:09:38.173 "data_size": 65536 00:09:38.173 }, 00:09:38.173 { 00:09:38.173 "name": "BaseBdev2", 00:09:38.173 "uuid": "39210370-7280-4ab2-a45f-2b1c53c37b42", 00:09:38.173 "is_configured": true, 00:09:38.173 "data_offset": 0, 00:09:38.173 "data_size": 65536 00:09:38.173 } 00:09:38.173 ] 00:09:38.173 }' 00:09:38.173 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:38.173 21:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.741 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:38.741 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:38.741 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:38.741 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:38.741 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:38.741 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:38.741 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:38.741 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:38.741 21:36:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.741 21:36:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.741 [2024-12-10 21:36:39.278438] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:38.741 21:36:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.741 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:38.741 "name": "Existed_Raid", 00:09:38.741 "aliases": [ 00:09:38.741 "dd646963-6e0b-4d46-b935-f18f2b18ca3c" 00:09:38.741 ], 00:09:38.741 "product_name": "Raid Volume", 00:09:38.741 "block_size": 512, 00:09:38.741 "num_blocks": 65536, 00:09:38.741 "uuid": "dd646963-6e0b-4d46-b935-f18f2b18ca3c", 00:09:38.741 "assigned_rate_limits": { 00:09:38.741 "rw_ios_per_sec": 0, 00:09:38.741 "rw_mbytes_per_sec": 0, 00:09:38.741 "r_mbytes_per_sec": 
0, 00:09:38.741 "w_mbytes_per_sec": 0 00:09:38.741 }, 00:09:38.741 "claimed": false, 00:09:38.741 "zoned": false, 00:09:38.741 "supported_io_types": { 00:09:38.741 "read": true, 00:09:38.741 "write": true, 00:09:38.741 "unmap": false, 00:09:38.741 "flush": false, 00:09:38.741 "reset": true, 00:09:38.741 "nvme_admin": false, 00:09:38.741 "nvme_io": false, 00:09:38.741 "nvme_io_md": false, 00:09:38.741 "write_zeroes": true, 00:09:38.741 "zcopy": false, 00:09:38.741 "get_zone_info": false, 00:09:38.741 "zone_management": false, 00:09:38.741 "zone_append": false, 00:09:38.741 "compare": false, 00:09:38.741 "compare_and_write": false, 00:09:38.741 "abort": false, 00:09:38.741 "seek_hole": false, 00:09:38.741 "seek_data": false, 00:09:38.741 "copy": false, 00:09:38.741 "nvme_iov_md": false 00:09:38.741 }, 00:09:38.741 "memory_domains": [ 00:09:38.741 { 00:09:38.741 "dma_device_id": "system", 00:09:38.741 "dma_device_type": 1 00:09:38.741 }, 00:09:38.741 { 00:09:38.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.741 "dma_device_type": 2 00:09:38.741 }, 00:09:38.741 { 00:09:38.741 "dma_device_id": "system", 00:09:38.741 "dma_device_type": 1 00:09:38.741 }, 00:09:38.741 { 00:09:38.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.741 "dma_device_type": 2 00:09:38.741 } 00:09:38.741 ], 00:09:38.741 "driver_specific": { 00:09:38.741 "raid": { 00:09:38.741 "uuid": "dd646963-6e0b-4d46-b935-f18f2b18ca3c", 00:09:38.741 "strip_size_kb": 0, 00:09:38.741 "state": "online", 00:09:38.741 "raid_level": "raid1", 00:09:38.741 "superblock": false, 00:09:38.741 "num_base_bdevs": 2, 00:09:38.741 "num_base_bdevs_discovered": 2, 00:09:38.741 "num_base_bdevs_operational": 2, 00:09:38.741 "base_bdevs_list": [ 00:09:38.741 { 00:09:38.741 "name": "BaseBdev1", 00:09:38.741 "uuid": "eee94fd1-68c8-4b3a-b4a0-e683ca8553bb", 00:09:38.741 "is_configured": true, 00:09:38.741 "data_offset": 0, 00:09:38.741 "data_size": 65536 00:09:38.741 }, 00:09:38.741 { 00:09:38.741 "name": "BaseBdev2", 
00:09:38.741 "uuid": "39210370-7280-4ab2-a45f-2b1c53c37b42", 00:09:38.741 "is_configured": true, 00:09:38.741 "data_offset": 0, 00:09:38.741 "data_size": 65536 00:09:38.741 } 00:09:38.741 ] 00:09:38.741 } 00:09:38.741 } 00:09:38.741 }' 00:09:38.741 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:38.741 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:38.741 BaseBdev2' 00:09:38.741 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.741 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:38.741 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.741 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:38.741 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.741 21:36:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.741 21:36:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.741 21:36:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.741 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.741 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.741 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.741 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:09:38.741 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:38.741 21:36:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.741 21:36:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.741 21:36:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.741 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.741 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.741 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:38.741 21:36:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.741 21:36:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.741 [2024-12-10 21:36:39.509816] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:39.000 21:36:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.000 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:39.000 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:39.000 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:39.000 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:39.000 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:39.001 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:09:39.001 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:09:39.001 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:39.001 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.001 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:39.001 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:39.001 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.001 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.001 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.001 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.001 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.001 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.001 21:36:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.001 21:36:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.001 21:36:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.001 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.001 "name": "Existed_Raid", 00:09:39.001 "uuid": "dd646963-6e0b-4d46-b935-f18f2b18ca3c", 00:09:39.001 "strip_size_kb": 0, 00:09:39.001 "state": "online", 00:09:39.001 "raid_level": "raid1", 00:09:39.001 "superblock": false, 00:09:39.001 "num_base_bdevs": 2, 00:09:39.001 "num_base_bdevs_discovered": 1, 00:09:39.001 "num_base_bdevs_operational": 1, 00:09:39.001 "base_bdevs_list": [ 00:09:39.001 
{ 00:09:39.001 "name": null, 00:09:39.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.001 "is_configured": false, 00:09:39.001 "data_offset": 0, 00:09:39.001 "data_size": 65536 00:09:39.001 }, 00:09:39.001 { 00:09:39.001 "name": "BaseBdev2", 00:09:39.001 "uuid": "39210370-7280-4ab2-a45f-2b1c53c37b42", 00:09:39.001 "is_configured": true, 00:09:39.001 "data_offset": 0, 00:09:39.001 "data_size": 65536 00:09:39.001 } 00:09:39.001 ] 00:09:39.001 }' 00:09:39.001 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.001 21:36:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.260 21:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:39.260 21:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:39.260 21:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.260 21:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:39.260 21:36:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.260 21:36:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.260 21:36:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.519 21:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:39.519 21:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:39.519 21:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:39.519 21:36:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.519 21:36:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:39.519 [2024-12-10 21:36:40.061846] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:39.519 [2024-12-10 21:36:40.061949] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:39.519 [2024-12-10 21:36:40.165820] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:39.519 [2024-12-10 21:36:40.165888] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:39.519 [2024-12-10 21:36:40.165902] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:39.519 21:36:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.519 21:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:39.519 21:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:39.519 21:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.519 21:36:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.519 21:36:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.519 21:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:39.519 21:36:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.519 21:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:39.519 21:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:39.519 21:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:39.519 21:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62792 00:09:39.519 21:36:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62792 ']' 00:09:39.519 21:36:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62792 00:09:39.519 21:36:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:39.519 21:36:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:39.519 21:36:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62792 00:09:39.519 killing process with pid 62792 00:09:39.519 21:36:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:39.519 21:36:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:39.519 21:36:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62792' 00:09:39.519 21:36:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62792 00:09:39.519 [2024-12-10 21:36:40.261502] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:39.519 21:36:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62792 00:09:39.519 [2024-12-10 21:36:40.280521] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:41.034 21:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:41.034 00:09:41.034 real 0m5.215s 00:09:41.034 user 0m7.511s 00:09:41.034 sys 0m0.850s 00:09:41.034 21:36:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.034 21:36:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.034 ************************************ 00:09:41.034 END TEST raid_state_function_test 00:09:41.034 ************************************ 00:09:41.034 21:36:41 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:09:41.034 21:36:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:41.034 21:36:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.034 21:36:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:41.034 ************************************ 00:09:41.034 START TEST raid_state_function_test_sb 00:09:41.034 ************************************ 00:09:41.034 21:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:09:41.034 21:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:41.034 21:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:41.034 21:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:41.034 21:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:41.034 21:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:41.034 21:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:41.034 21:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:41.034 21:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:41.034 21:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:41.034 21:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:41.034 21:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:41.034 21:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:41.034 21:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:41.034 21:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:41.034 21:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:41.034 21:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:41.034 21:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:41.034 21:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:41.034 21:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:41.034 21:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:41.034 21:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:41.034 21:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:41.034 21:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63041 00:09:41.034 21:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:41.034 21:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63041' 00:09:41.034 Process raid pid: 63041 00:09:41.034 21:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63041 00:09:41.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:41.034 21:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 63041 ']' 00:09:41.034 21:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.034 21:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:41.034 21:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.034 21:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:41.034 21:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.034 [2024-12-10 21:36:41.618196] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:09:41.034 [2024-12-10 21:36:41.618399] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.034 [2024-12-10 21:36:41.794935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.291 [2024-12-10 21:36:41.914100] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.549 [2024-12-10 21:36:42.126164] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:41.549 [2024-12-10 21:36:42.126309] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:41.809 21:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:41.809 21:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:41.809 21:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 
BaseBdev2'\''' -n Existed_Raid 00:09:41.809 21:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.809 21:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.809 [2024-12-10 21:36:42.480377] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:41.809 [2024-12-10 21:36:42.480510] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:41.809 [2024-12-10 21:36:42.480567] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:41.809 [2024-12-10 21:36:42.480619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:41.809 21:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.809 21:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:41.809 21:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.809 21:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.809 21:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:41.809 21:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:41.809 21:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:41.809 21:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.809 21:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.809 21:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.809 21:36:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.809 21:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.809 21:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.809 21:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.809 21:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.809 21:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.809 21:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.809 "name": "Existed_Raid", 00:09:41.809 "uuid": "2dfeac62-c7ae-4972-a742-feae7aea1f34", 00:09:41.809 "strip_size_kb": 0, 00:09:41.809 "state": "configuring", 00:09:41.809 "raid_level": "raid1", 00:09:41.809 "superblock": true, 00:09:41.809 "num_base_bdevs": 2, 00:09:41.809 "num_base_bdevs_discovered": 0, 00:09:41.809 "num_base_bdevs_operational": 2, 00:09:41.809 "base_bdevs_list": [ 00:09:41.809 { 00:09:41.809 "name": "BaseBdev1", 00:09:41.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.809 "is_configured": false, 00:09:41.809 "data_offset": 0, 00:09:41.809 "data_size": 0 00:09:41.809 }, 00:09:41.809 { 00:09:41.809 "name": "BaseBdev2", 00:09:41.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.809 "is_configured": false, 00:09:41.809 "data_offset": 0, 00:09:41.809 "data_size": 0 00:09:41.809 } 00:09:41.809 ] 00:09:41.809 }' 00:09:41.809 21:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.809 21:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.376 21:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:42.376 
21:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.376 21:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.376 [2024-12-10 21:36:42.947528] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:42.376 [2024-12-10 21:36:42.947574] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:42.376 21:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.376 21:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:42.376 21:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.376 21:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.376 [2024-12-10 21:36:42.955502] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:42.376 [2024-12-10 21:36:42.955547] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:42.376 [2024-12-10 21:36:42.955557] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:42.376 [2024-12-10 21:36:42.955568] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:42.376 21:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.376 21:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:42.376 21:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.376 21:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.376 [2024-12-10 
21:36:43.000977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:42.376 BaseBdev1 00:09:42.376 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.376 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:42.376 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:42.376 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:42.376 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:42.376 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:42.376 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:42.376 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:42.376 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.376 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.376 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.376 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:42.376 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.376 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.376 [ 00:09:42.376 { 00:09:42.376 "name": "BaseBdev1", 00:09:42.376 "aliases": [ 00:09:42.376 "6df80bf3-63c4-4b72-bd1f-ca0327267ab0" 00:09:42.376 ], 00:09:42.376 "product_name": "Malloc disk", 00:09:42.376 "block_size": 512, 00:09:42.376 "num_blocks": 
65536, 00:09:42.376 "uuid": "6df80bf3-63c4-4b72-bd1f-ca0327267ab0", 00:09:42.376 "assigned_rate_limits": { 00:09:42.376 "rw_ios_per_sec": 0, 00:09:42.376 "rw_mbytes_per_sec": 0, 00:09:42.376 "r_mbytes_per_sec": 0, 00:09:42.376 "w_mbytes_per_sec": 0 00:09:42.376 }, 00:09:42.376 "claimed": true, 00:09:42.376 "claim_type": "exclusive_write", 00:09:42.376 "zoned": false, 00:09:42.376 "supported_io_types": { 00:09:42.376 "read": true, 00:09:42.376 "write": true, 00:09:42.376 "unmap": true, 00:09:42.376 "flush": true, 00:09:42.376 "reset": true, 00:09:42.376 "nvme_admin": false, 00:09:42.376 "nvme_io": false, 00:09:42.376 "nvme_io_md": false, 00:09:42.376 "write_zeroes": true, 00:09:42.376 "zcopy": true, 00:09:42.376 "get_zone_info": false, 00:09:42.376 "zone_management": false, 00:09:42.376 "zone_append": false, 00:09:42.376 "compare": false, 00:09:42.376 "compare_and_write": false, 00:09:42.376 "abort": true, 00:09:42.376 "seek_hole": false, 00:09:42.376 "seek_data": false, 00:09:42.376 "copy": true, 00:09:42.376 "nvme_iov_md": false 00:09:42.376 }, 00:09:42.376 "memory_domains": [ 00:09:42.376 { 00:09:42.376 "dma_device_id": "system", 00:09:42.376 "dma_device_type": 1 00:09:42.376 }, 00:09:42.376 { 00:09:42.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.376 "dma_device_type": 2 00:09:42.376 } 00:09:42.376 ], 00:09:42.376 "driver_specific": {} 00:09:42.376 } 00:09:42.376 ] 00:09:42.376 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.376 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:42.377 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:42.377 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.377 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:09:42.377 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:42.377 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:42.377 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:42.377 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.377 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.377 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.377 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.377 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.377 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.377 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.377 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.377 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.377 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.377 "name": "Existed_Raid", 00:09:42.377 "uuid": "d60439f9-6de7-430e-a989-048516f5b73f", 00:09:42.377 "strip_size_kb": 0, 00:09:42.377 "state": "configuring", 00:09:42.377 "raid_level": "raid1", 00:09:42.377 "superblock": true, 00:09:42.377 "num_base_bdevs": 2, 00:09:42.377 "num_base_bdevs_discovered": 1, 00:09:42.377 "num_base_bdevs_operational": 2, 00:09:42.377 "base_bdevs_list": [ 00:09:42.377 { 00:09:42.377 "name": "BaseBdev1", 00:09:42.377 "uuid": 
"6df80bf3-63c4-4b72-bd1f-ca0327267ab0", 00:09:42.377 "is_configured": true, 00:09:42.377 "data_offset": 2048, 00:09:42.377 "data_size": 63488 00:09:42.377 }, 00:09:42.377 { 00:09:42.377 "name": "BaseBdev2", 00:09:42.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.377 "is_configured": false, 00:09:42.377 "data_offset": 0, 00:09:42.377 "data_size": 0 00:09:42.377 } 00:09:42.377 ] 00:09:42.377 }' 00:09:42.377 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.377 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.946 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:42.946 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.946 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.946 [2024-12-10 21:36:43.464260] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:42.946 [2024-12-10 21:36:43.464407] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:42.946 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.946 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:42.946 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.946 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.946 [2024-12-10 21:36:43.472283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:42.946 [2024-12-10 21:36:43.474233] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 
00:09:42.946 [2024-12-10 21:36:43.474315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:42.946 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.946 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:42.946 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:42.946 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:42.946 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.946 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.946 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:42.946 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:42.946 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:42.946 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.946 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.946 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.946 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.946 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.946 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.946 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:09:42.946 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.946 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.946 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.946 "name": "Existed_Raid", 00:09:42.946 "uuid": "29aeb241-834d-421a-b6a7-937ed6dd2756", 00:09:42.946 "strip_size_kb": 0, 00:09:42.946 "state": "configuring", 00:09:42.946 "raid_level": "raid1", 00:09:42.946 "superblock": true, 00:09:42.946 "num_base_bdevs": 2, 00:09:42.946 "num_base_bdevs_discovered": 1, 00:09:42.946 "num_base_bdevs_operational": 2, 00:09:42.946 "base_bdevs_list": [ 00:09:42.946 { 00:09:42.946 "name": "BaseBdev1", 00:09:42.946 "uuid": "6df80bf3-63c4-4b72-bd1f-ca0327267ab0", 00:09:42.946 "is_configured": true, 00:09:42.946 "data_offset": 2048, 00:09:42.946 "data_size": 63488 00:09:42.946 }, 00:09:42.946 { 00:09:42.946 "name": "BaseBdev2", 00:09:42.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.946 "is_configured": false, 00:09:42.946 "data_offset": 0, 00:09:42.946 "data_size": 0 00:09:42.946 } 00:09:42.946 ] 00:09:42.946 }' 00:09:42.946 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.946 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.206 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:43.206 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.206 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.206 [2024-12-10 21:36:43.921675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:43.206 [2024-12-10 21:36:43.922054] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: 
io device register 0x617000007e80 00:09:43.206 [2024-12-10 21:36:43.922111] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:43.206 [2024-12-10 21:36:43.922439] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:43.206 [2024-12-10 21:36:43.922675] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:43.206 [2024-12-10 21:36:43.922729] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:43.206 BaseBdev2 00:09:43.206 [2024-12-10 21:36:43.922947] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:43.206 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.206 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:43.206 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:43.206 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:43.206 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:43.206 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:43.206 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:43.206 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:43.206 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.206 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.206 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.206 21:36:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:43.206 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.206 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.206 [ 00:09:43.206 { 00:09:43.206 "name": "BaseBdev2", 00:09:43.206 "aliases": [ 00:09:43.206 "fdcd8625-82ad-41a7-8280-3304a9fd66d4" 00:09:43.206 ], 00:09:43.206 "product_name": "Malloc disk", 00:09:43.206 "block_size": 512, 00:09:43.206 "num_blocks": 65536, 00:09:43.206 "uuid": "fdcd8625-82ad-41a7-8280-3304a9fd66d4", 00:09:43.206 "assigned_rate_limits": { 00:09:43.206 "rw_ios_per_sec": 0, 00:09:43.206 "rw_mbytes_per_sec": 0, 00:09:43.206 "r_mbytes_per_sec": 0, 00:09:43.206 "w_mbytes_per_sec": 0 00:09:43.206 }, 00:09:43.206 "claimed": true, 00:09:43.206 "claim_type": "exclusive_write", 00:09:43.206 "zoned": false, 00:09:43.206 "supported_io_types": { 00:09:43.206 "read": true, 00:09:43.206 "write": true, 00:09:43.206 "unmap": true, 00:09:43.206 "flush": true, 00:09:43.206 "reset": true, 00:09:43.206 "nvme_admin": false, 00:09:43.206 "nvme_io": false, 00:09:43.206 "nvme_io_md": false, 00:09:43.206 "write_zeroes": true, 00:09:43.206 "zcopy": true, 00:09:43.206 "get_zone_info": false, 00:09:43.206 "zone_management": false, 00:09:43.206 "zone_append": false, 00:09:43.206 "compare": false, 00:09:43.206 "compare_and_write": false, 00:09:43.206 "abort": true, 00:09:43.206 "seek_hole": false, 00:09:43.206 "seek_data": false, 00:09:43.206 "copy": true, 00:09:43.206 "nvme_iov_md": false 00:09:43.206 }, 00:09:43.206 "memory_domains": [ 00:09:43.206 { 00:09:43.206 "dma_device_id": "system", 00:09:43.206 "dma_device_type": 1 00:09:43.206 }, 00:09:43.206 { 00:09:43.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.206 "dma_device_type": 2 00:09:43.206 } 00:09:43.206 ], 00:09:43.206 "driver_specific": {} 00:09:43.206 } 00:09:43.206 ] 
00:09:43.206 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.206 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:43.206 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:43.207 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:43.207 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:43.207 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.207 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:43.207 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.207 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.207 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:43.207 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.207 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.207 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.207 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.207 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.207 21:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.207 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.207 
21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.466 21:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.466 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.466 "name": "Existed_Raid", 00:09:43.466 "uuid": "29aeb241-834d-421a-b6a7-937ed6dd2756", 00:09:43.466 "strip_size_kb": 0, 00:09:43.466 "state": "online", 00:09:43.466 "raid_level": "raid1", 00:09:43.466 "superblock": true, 00:09:43.466 "num_base_bdevs": 2, 00:09:43.466 "num_base_bdevs_discovered": 2, 00:09:43.466 "num_base_bdevs_operational": 2, 00:09:43.466 "base_bdevs_list": [ 00:09:43.466 { 00:09:43.466 "name": "BaseBdev1", 00:09:43.466 "uuid": "6df80bf3-63c4-4b72-bd1f-ca0327267ab0", 00:09:43.466 "is_configured": true, 00:09:43.466 "data_offset": 2048, 00:09:43.466 "data_size": 63488 00:09:43.466 }, 00:09:43.466 { 00:09:43.466 "name": "BaseBdev2", 00:09:43.466 "uuid": "fdcd8625-82ad-41a7-8280-3304a9fd66d4", 00:09:43.466 "is_configured": true, 00:09:43.466 "data_offset": 2048, 00:09:43.466 "data_size": 63488 00:09:43.466 } 00:09:43.466 ] 00:09:43.466 }' 00:09:43.466 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.466 21:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.726 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:43.726 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:43.726 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:43.726 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:43.726 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:43.726 21:36:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:43.726 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:43.726 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:43.726 21:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.726 21:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.726 [2024-12-10 21:36:44.433137] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:43.726 21:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.726 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:43.726 "name": "Existed_Raid", 00:09:43.726 "aliases": [ 00:09:43.726 "29aeb241-834d-421a-b6a7-937ed6dd2756" 00:09:43.726 ], 00:09:43.726 "product_name": "Raid Volume", 00:09:43.726 "block_size": 512, 00:09:43.726 "num_blocks": 63488, 00:09:43.726 "uuid": "29aeb241-834d-421a-b6a7-937ed6dd2756", 00:09:43.726 "assigned_rate_limits": { 00:09:43.726 "rw_ios_per_sec": 0, 00:09:43.726 "rw_mbytes_per_sec": 0, 00:09:43.726 "r_mbytes_per_sec": 0, 00:09:43.726 "w_mbytes_per_sec": 0 00:09:43.726 }, 00:09:43.726 "claimed": false, 00:09:43.726 "zoned": false, 00:09:43.726 "supported_io_types": { 00:09:43.726 "read": true, 00:09:43.726 "write": true, 00:09:43.726 "unmap": false, 00:09:43.726 "flush": false, 00:09:43.726 "reset": true, 00:09:43.726 "nvme_admin": false, 00:09:43.726 "nvme_io": false, 00:09:43.726 "nvme_io_md": false, 00:09:43.726 "write_zeroes": true, 00:09:43.726 "zcopy": false, 00:09:43.726 "get_zone_info": false, 00:09:43.726 "zone_management": false, 00:09:43.726 "zone_append": false, 00:09:43.726 "compare": false, 00:09:43.726 "compare_and_write": false, 00:09:43.726 "abort": false, 
00:09:43.726 "seek_hole": false, 00:09:43.726 "seek_data": false, 00:09:43.726 "copy": false, 00:09:43.726 "nvme_iov_md": false 00:09:43.726 }, 00:09:43.726 "memory_domains": [ 00:09:43.726 { 00:09:43.726 "dma_device_id": "system", 00:09:43.726 "dma_device_type": 1 00:09:43.726 }, 00:09:43.726 { 00:09:43.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.726 "dma_device_type": 2 00:09:43.726 }, 00:09:43.726 { 00:09:43.726 "dma_device_id": "system", 00:09:43.726 "dma_device_type": 1 00:09:43.726 }, 00:09:43.726 { 00:09:43.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.726 "dma_device_type": 2 00:09:43.726 } 00:09:43.726 ], 00:09:43.726 "driver_specific": { 00:09:43.726 "raid": { 00:09:43.726 "uuid": "29aeb241-834d-421a-b6a7-937ed6dd2756", 00:09:43.726 "strip_size_kb": 0, 00:09:43.726 "state": "online", 00:09:43.726 "raid_level": "raid1", 00:09:43.726 "superblock": true, 00:09:43.726 "num_base_bdevs": 2, 00:09:43.726 "num_base_bdevs_discovered": 2, 00:09:43.726 "num_base_bdevs_operational": 2, 00:09:43.726 "base_bdevs_list": [ 00:09:43.726 { 00:09:43.726 "name": "BaseBdev1", 00:09:43.726 "uuid": "6df80bf3-63c4-4b72-bd1f-ca0327267ab0", 00:09:43.726 "is_configured": true, 00:09:43.726 "data_offset": 2048, 00:09:43.726 "data_size": 63488 00:09:43.726 }, 00:09:43.726 { 00:09:43.726 "name": "BaseBdev2", 00:09:43.726 "uuid": "fdcd8625-82ad-41a7-8280-3304a9fd66d4", 00:09:43.726 "is_configured": true, 00:09:43.726 "data_offset": 2048, 00:09:43.726 "data_size": 63488 00:09:43.726 } 00:09:43.726 ] 00:09:43.726 } 00:09:43.726 } 00:09:43.726 }' 00:09:43.726 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:43.726 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:43.726 BaseBdev2' 00:09:43.726 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:43.987 21:36:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.987 [2024-12-10 21:36:44.644583] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.987 21:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.247 21:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.247 "name": "Existed_Raid", 00:09:44.247 "uuid": "29aeb241-834d-421a-b6a7-937ed6dd2756", 00:09:44.247 "strip_size_kb": 0, 00:09:44.247 "state": "online", 00:09:44.247 "raid_level": "raid1", 00:09:44.247 "superblock": true, 00:09:44.247 "num_base_bdevs": 2, 00:09:44.247 "num_base_bdevs_discovered": 1, 00:09:44.247 "num_base_bdevs_operational": 1, 00:09:44.247 "base_bdevs_list": [ 00:09:44.247 { 00:09:44.247 "name": null, 00:09:44.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.247 "is_configured": false, 00:09:44.247 "data_offset": 0, 00:09:44.247 "data_size": 63488 00:09:44.247 }, 00:09:44.247 { 00:09:44.247 "name": "BaseBdev2", 00:09:44.247 "uuid": "fdcd8625-82ad-41a7-8280-3304a9fd66d4", 00:09:44.247 "is_configured": true, 00:09:44.247 "data_offset": 2048, 00:09:44.247 "data_size": 63488 00:09:44.247 } 00:09:44.248 ] 00:09:44.248 }' 00:09:44.248 21:36:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.248 21:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.507 21:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:44.507 21:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:44.507 21:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.507 21:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:44.507 21:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.507 21:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.507 21:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.507 21:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:44.507 21:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:44.507 21:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:44.507 21:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.507 21:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.507 [2024-12-10 21:36:45.212298] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:44.507 [2024-12-10 21:36:45.212408] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:44.766 [2024-12-10 21:36:45.311407] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:44.766 [2024-12-10 21:36:45.311504] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:44.766 [2024-12-10 21:36:45.311517] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:44.766 21:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.766 21:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:44.766 21:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:44.766 21:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.766 21:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.766 21:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:44.766 21:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.767 21:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.767 21:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:44.767 21:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:44.767 21:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:44.767 21:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63041 00:09:44.767 21:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 63041 ']' 00:09:44.767 21:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 63041 00:09:44.767 21:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:44.767 21:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:09:44.767 21:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63041 00:09:44.767 killing process with pid 63041 00:09:44.767 21:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:44.767 21:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:44.767 21:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63041' 00:09:44.767 21:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 63041 00:09:44.767 [2024-12-10 21:36:45.405601] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:44.767 21:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 63041 00:09:44.767 [2024-12-10 21:36:45.423267] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:46.146 21:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:46.146 00:09:46.146 real 0m5.107s 00:09:46.146 user 0m7.303s 00:09:46.146 sys 0m0.838s 00:09:46.146 21:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.146 21:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.146 ************************************ 00:09:46.146 END TEST raid_state_function_test_sb 00:09:46.146 ************************************ 00:09:46.146 21:36:46 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:09:46.146 21:36:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:46.146 21:36:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.146 21:36:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:46.146 ************************************ 00:09:46.146 START TEST 
raid_superblock_test 00:09:46.146 ************************************ 00:09:46.146 21:36:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:09:46.146 21:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:46.146 21:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:09:46.146 21:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:46.146 21:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:46.146 21:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:46.146 21:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:46.146 21:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:46.146 21:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:46.146 21:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:46.146 21:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:46.146 21:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:46.146 21:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:46.146 21:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:46.146 21:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:09:46.146 21:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:46.146 21:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63293 00:09:46.146 21:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63293 00:09:46.146 21:36:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63293 ']' 00:09:46.146 21:36:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.146 21:36:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.146 21:36:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.146 21:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:46.146 21:36:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.146 21:36:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.146 [2024-12-10 21:36:46.785835] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:09:46.146 [2024-12-10 21:36:46.785960] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63293 ] 00:09:46.406 [2024-12-10 21:36:46.961557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.406 [2024-12-10 21:36:47.093989] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.665 [2024-12-10 21:36:47.316911] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.665 [2024-12-10 21:36:47.316980] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:47.012 21:36:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:47.012 21:36:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:47.012 21:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:47.012 21:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:47.012 21:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:47.012 21:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:47.012 21:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:47.012 21:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:47.012 21:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:47.012 21:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:47.012 21:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:47.012 
21:36:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.012 21:36:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.012 malloc1 00:09:47.012 21:36:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.012 21:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:47.012 21:36:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.012 21:36:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.012 [2024-12-10 21:36:47.713025] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:47.012 [2024-12-10 21:36:47.713095] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.012 [2024-12-10 21:36:47.713119] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:47.012 [2024-12-10 21:36:47.713128] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.012 [2024-12-10 21:36:47.715398] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.012 [2024-12-10 21:36:47.715445] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:47.012 pt1 00:09:47.012 21:36:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.012 21:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:47.012 21:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:47.012 21:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:47.012 21:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:47.012 21:36:47 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:47.012 21:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:47.012 21:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:47.012 21:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:47.012 21:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:47.012 21:36:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.012 21:36:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.296 malloc2 00:09:47.296 21:36:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.296 21:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:47.296 21:36:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.296 21:36:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.296 [2024-12-10 21:36:47.769041] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:47.296 [2024-12-10 21:36:47.769168] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.296 [2024-12-10 21:36:47.769216] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:47.296 [2024-12-10 21:36:47.769251] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.296 [2024-12-10 21:36:47.771474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.296 [2024-12-10 21:36:47.771543] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:47.296 
pt2 00:09:47.296 21:36:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.296 21:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:47.296 21:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:47.296 21:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:47.296 21:36:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.296 21:36:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.296 [2024-12-10 21:36:47.781064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:47.296 [2024-12-10 21:36:47.782900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:47.296 [2024-12-10 21:36:47.783117] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:47.296 [2024-12-10 21:36:47.783168] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:47.296 [2024-12-10 21:36:47.783469] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:47.296 [2024-12-10 21:36:47.783700] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:47.296 [2024-12-10 21:36:47.783754] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:47.296 [2024-12-10 21:36:47.783976] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.296 21:36:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.296 21:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:47.296 21:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:09:47.296 21:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:47.296 21:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:47.296 21:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:47.297 21:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:47.297 21:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.297 21:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.297 21:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.297 21:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.297 21:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.297 21:36:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.297 21:36:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.297 21:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:47.297 21:36:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.297 21:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.297 "name": "raid_bdev1", 00:09:47.297 "uuid": "f3dd569c-fe7e-4ab6-8e03-30b092309950", 00:09:47.297 "strip_size_kb": 0, 00:09:47.297 "state": "online", 00:09:47.297 "raid_level": "raid1", 00:09:47.297 "superblock": true, 00:09:47.297 "num_base_bdevs": 2, 00:09:47.297 "num_base_bdevs_discovered": 2, 00:09:47.297 "num_base_bdevs_operational": 2, 00:09:47.297 "base_bdevs_list": [ 00:09:47.297 { 00:09:47.297 "name": "pt1", 00:09:47.297 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:09:47.297 "is_configured": true, 00:09:47.297 "data_offset": 2048, 00:09:47.297 "data_size": 63488 00:09:47.297 }, 00:09:47.297 { 00:09:47.297 "name": "pt2", 00:09:47.297 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:47.297 "is_configured": true, 00:09:47.297 "data_offset": 2048, 00:09:47.297 "data_size": 63488 00:09:47.297 } 00:09:47.297 ] 00:09:47.297 }' 00:09:47.297 21:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.297 21:36:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.556 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:47.556 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:47.557 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:47.557 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:47.557 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:47.557 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:47.557 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:47.557 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:47.557 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.557 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.557 [2024-12-10 21:36:48.236826] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:47.557 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.557 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:09:47.557 "name": "raid_bdev1", 00:09:47.557 "aliases": [ 00:09:47.557 "f3dd569c-fe7e-4ab6-8e03-30b092309950" 00:09:47.557 ], 00:09:47.557 "product_name": "Raid Volume", 00:09:47.557 "block_size": 512, 00:09:47.557 "num_blocks": 63488, 00:09:47.557 "uuid": "f3dd569c-fe7e-4ab6-8e03-30b092309950", 00:09:47.557 "assigned_rate_limits": { 00:09:47.557 "rw_ios_per_sec": 0, 00:09:47.557 "rw_mbytes_per_sec": 0, 00:09:47.557 "r_mbytes_per_sec": 0, 00:09:47.557 "w_mbytes_per_sec": 0 00:09:47.557 }, 00:09:47.557 "claimed": false, 00:09:47.557 "zoned": false, 00:09:47.557 "supported_io_types": { 00:09:47.557 "read": true, 00:09:47.557 "write": true, 00:09:47.557 "unmap": false, 00:09:47.557 "flush": false, 00:09:47.557 "reset": true, 00:09:47.557 "nvme_admin": false, 00:09:47.557 "nvme_io": false, 00:09:47.557 "nvme_io_md": false, 00:09:47.557 "write_zeroes": true, 00:09:47.557 "zcopy": false, 00:09:47.557 "get_zone_info": false, 00:09:47.557 "zone_management": false, 00:09:47.557 "zone_append": false, 00:09:47.557 "compare": false, 00:09:47.557 "compare_and_write": false, 00:09:47.557 "abort": false, 00:09:47.557 "seek_hole": false, 00:09:47.557 "seek_data": false, 00:09:47.557 "copy": false, 00:09:47.557 "nvme_iov_md": false 00:09:47.557 }, 00:09:47.557 "memory_domains": [ 00:09:47.557 { 00:09:47.557 "dma_device_id": "system", 00:09:47.557 "dma_device_type": 1 00:09:47.557 }, 00:09:47.557 { 00:09:47.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.557 "dma_device_type": 2 00:09:47.557 }, 00:09:47.557 { 00:09:47.557 "dma_device_id": "system", 00:09:47.557 "dma_device_type": 1 00:09:47.557 }, 00:09:47.557 { 00:09:47.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.557 "dma_device_type": 2 00:09:47.557 } 00:09:47.557 ], 00:09:47.557 "driver_specific": { 00:09:47.557 "raid": { 00:09:47.557 "uuid": "f3dd569c-fe7e-4ab6-8e03-30b092309950", 00:09:47.557 "strip_size_kb": 0, 00:09:47.557 "state": "online", 00:09:47.557 "raid_level": "raid1", 
00:09:47.557 "superblock": true, 00:09:47.557 "num_base_bdevs": 2, 00:09:47.557 "num_base_bdevs_discovered": 2, 00:09:47.557 "num_base_bdevs_operational": 2, 00:09:47.557 "base_bdevs_list": [ 00:09:47.557 { 00:09:47.557 "name": "pt1", 00:09:47.557 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:47.557 "is_configured": true, 00:09:47.557 "data_offset": 2048, 00:09:47.557 "data_size": 63488 00:09:47.557 }, 00:09:47.557 { 00:09:47.557 "name": "pt2", 00:09:47.557 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:47.557 "is_configured": true, 00:09:47.557 "data_offset": 2048, 00:09:47.557 "data_size": 63488 00:09:47.557 } 00:09:47.557 ] 00:09:47.557 } 00:09:47.557 } 00:09:47.557 }' 00:09:47.557 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:47.557 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:47.557 pt2' 00:09:47.557 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.817 [2024-12-10 21:36:48.484180] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f3dd569c-fe7e-4ab6-8e03-30b092309950 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f3dd569c-fe7e-4ab6-8e03-30b092309950 ']' 00:09:47.817 21:36:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.817 [2024-12-10 21:36:48.527789] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:47.817 [2024-12-10 21:36:48.527823] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:47.817 [2024-12-10 21:36:48.527951] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:47.817 [2024-12-10 21:36:48.528021] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:47.817 [2024-12-10 21:36:48.528037] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.817 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:48.078 21:36:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.078 [2024-12-10 21:36:48.659631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:48.078 [2024-12-10 21:36:48.661673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:48.078 [2024-12-10 21:36:48.661743] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:48.078 [2024-12-10 21:36:48.661804] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:48.078 [2024-12-10 21:36:48.661821] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:48.078 [2024-12-10 21:36:48.661832] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:48.078 request: 00:09:48.078 { 00:09:48.078 "name": "raid_bdev1", 00:09:48.078 "raid_level": "raid1", 00:09:48.078 "base_bdevs": [ 00:09:48.078 "malloc1", 00:09:48.078 "malloc2" 00:09:48.078 ], 00:09:48.078 "superblock": false, 00:09:48.078 "method": "bdev_raid_create", 00:09:48.078 "req_id": 1 00:09:48.078 } 00:09:48.078 Got 
JSON-RPC error response 00:09:48.078 response: 00:09:48.078 { 00:09:48.078 "code": -17, 00:09:48.078 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:48.078 } 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.078 [2024-12-10 21:36:48.727543] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:48.078 [2024-12-10 21:36:48.727682] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:09:48.078 [2024-12-10 21:36:48.727708] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:48.078 [2024-12-10 21:36:48.727721] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.078 [2024-12-10 21:36:48.730220] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.078 [2024-12-10 21:36:48.730265] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:48.078 [2024-12-10 21:36:48.730363] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:48.078 [2024-12-10 21:36:48.730434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:48.078 pt1 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.078 
21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.078 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.078 "name": "raid_bdev1", 00:09:48.078 "uuid": "f3dd569c-fe7e-4ab6-8e03-30b092309950", 00:09:48.078 "strip_size_kb": 0, 00:09:48.078 "state": "configuring", 00:09:48.078 "raid_level": "raid1", 00:09:48.078 "superblock": true, 00:09:48.078 "num_base_bdevs": 2, 00:09:48.078 "num_base_bdevs_discovered": 1, 00:09:48.078 "num_base_bdevs_operational": 2, 00:09:48.078 "base_bdevs_list": [ 00:09:48.078 { 00:09:48.078 "name": "pt1", 00:09:48.078 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:48.078 "is_configured": true, 00:09:48.079 "data_offset": 2048, 00:09:48.079 "data_size": 63488 00:09:48.079 }, 00:09:48.079 { 00:09:48.079 "name": null, 00:09:48.079 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:48.079 "is_configured": false, 00:09:48.079 "data_offset": 2048, 00:09:48.079 "data_size": 63488 00:09:48.079 } 00:09:48.079 ] 00:09:48.079 }' 00:09:48.079 21:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.079 21:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.648 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:48.648 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:48.648 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:09:48.648 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:48.648 21:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.648 21:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.648 [2024-12-10 21:36:49.190737] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:48.648 [2024-12-10 21:36:49.190888] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.648 [2024-12-10 21:36:49.190940] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:48.648 [2024-12-10 21:36:49.191008] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.648 [2024-12-10 21:36:49.191595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.648 [2024-12-10 21:36:49.191684] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:48.648 [2024-12-10 21:36:49.191811] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:48.648 [2024-12-10 21:36:49.191875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:48.648 [2024-12-10 21:36:49.192037] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:48.648 [2024-12-10 21:36:49.192084] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:48.648 [2024-12-10 21:36:49.192388] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:48.648 [2024-12-10 21:36:49.192612] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:48.648 [2024-12-10 21:36:49.192658] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007e80 00:09:48.648 [2024-12-10 21:36:49.192873] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.648 pt2 00:09:48.648 21:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.648 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:48.648 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:48.648 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:48.648 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.648 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:48.648 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.648 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.648 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:48.648 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.648 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.648 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.648 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.648 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.648 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.648 21:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.648 21:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:09:48.648 21:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.648 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.648 "name": "raid_bdev1", 00:09:48.648 "uuid": "f3dd569c-fe7e-4ab6-8e03-30b092309950", 00:09:48.648 "strip_size_kb": 0, 00:09:48.648 "state": "online", 00:09:48.648 "raid_level": "raid1", 00:09:48.648 "superblock": true, 00:09:48.648 "num_base_bdevs": 2, 00:09:48.648 "num_base_bdevs_discovered": 2, 00:09:48.648 "num_base_bdevs_operational": 2, 00:09:48.648 "base_bdevs_list": [ 00:09:48.648 { 00:09:48.648 "name": "pt1", 00:09:48.648 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:48.648 "is_configured": true, 00:09:48.648 "data_offset": 2048, 00:09:48.648 "data_size": 63488 00:09:48.648 }, 00:09:48.648 { 00:09:48.648 "name": "pt2", 00:09:48.648 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:48.648 "is_configured": true, 00:09:48.648 "data_offset": 2048, 00:09:48.648 "data_size": 63488 00:09:48.648 } 00:09:48.648 ] 00:09:48.648 }' 00:09:48.648 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.648 21:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.907 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:48.907 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:48.907 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:48.907 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:48.907 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:48.907 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:48.907 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:48.907 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:48.907 21:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.907 21:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.907 [2024-12-10 21:36:49.670198] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:49.167 21:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.167 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:49.167 "name": "raid_bdev1", 00:09:49.167 "aliases": [ 00:09:49.167 "f3dd569c-fe7e-4ab6-8e03-30b092309950" 00:09:49.167 ], 00:09:49.167 "product_name": "Raid Volume", 00:09:49.167 "block_size": 512, 00:09:49.167 "num_blocks": 63488, 00:09:49.167 "uuid": "f3dd569c-fe7e-4ab6-8e03-30b092309950", 00:09:49.167 "assigned_rate_limits": { 00:09:49.167 "rw_ios_per_sec": 0, 00:09:49.167 "rw_mbytes_per_sec": 0, 00:09:49.167 "r_mbytes_per_sec": 0, 00:09:49.167 "w_mbytes_per_sec": 0 00:09:49.167 }, 00:09:49.167 "claimed": false, 00:09:49.167 "zoned": false, 00:09:49.167 "supported_io_types": { 00:09:49.167 "read": true, 00:09:49.167 "write": true, 00:09:49.167 "unmap": false, 00:09:49.167 "flush": false, 00:09:49.167 "reset": true, 00:09:49.167 "nvme_admin": false, 00:09:49.167 "nvme_io": false, 00:09:49.167 "nvme_io_md": false, 00:09:49.167 "write_zeroes": true, 00:09:49.167 "zcopy": false, 00:09:49.167 "get_zone_info": false, 00:09:49.167 "zone_management": false, 00:09:49.167 "zone_append": false, 00:09:49.167 "compare": false, 00:09:49.167 "compare_and_write": false, 00:09:49.167 "abort": false, 00:09:49.167 "seek_hole": false, 00:09:49.167 "seek_data": false, 00:09:49.167 "copy": false, 00:09:49.167 "nvme_iov_md": false 00:09:49.167 }, 00:09:49.167 "memory_domains": [ 00:09:49.167 { 00:09:49.167 "dma_device_id": 
"system", 00:09:49.167 "dma_device_type": 1 00:09:49.167 }, 00:09:49.167 { 00:09:49.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.167 "dma_device_type": 2 00:09:49.167 }, 00:09:49.167 { 00:09:49.167 "dma_device_id": "system", 00:09:49.167 "dma_device_type": 1 00:09:49.167 }, 00:09:49.167 { 00:09:49.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.167 "dma_device_type": 2 00:09:49.167 } 00:09:49.167 ], 00:09:49.167 "driver_specific": { 00:09:49.167 "raid": { 00:09:49.167 "uuid": "f3dd569c-fe7e-4ab6-8e03-30b092309950", 00:09:49.167 "strip_size_kb": 0, 00:09:49.167 "state": "online", 00:09:49.167 "raid_level": "raid1", 00:09:49.167 "superblock": true, 00:09:49.167 "num_base_bdevs": 2, 00:09:49.167 "num_base_bdevs_discovered": 2, 00:09:49.167 "num_base_bdevs_operational": 2, 00:09:49.167 "base_bdevs_list": [ 00:09:49.167 { 00:09:49.167 "name": "pt1", 00:09:49.167 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:49.167 "is_configured": true, 00:09:49.167 "data_offset": 2048, 00:09:49.167 "data_size": 63488 00:09:49.167 }, 00:09:49.167 { 00:09:49.167 "name": "pt2", 00:09:49.167 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:49.167 "is_configured": true, 00:09:49.167 "data_offset": 2048, 00:09:49.167 "data_size": 63488 00:09:49.167 } 00:09:49.167 ] 00:09:49.167 } 00:09:49.167 } 00:09:49.167 }' 00:09:49.167 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:49.167 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:49.167 pt2' 00:09:49.167 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.167 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:49.167 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:09:49.167 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:49.167 21:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.167 21:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.167 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.167 21:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.167 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.167 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.167 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.167 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.167 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:49.167 21:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.167 21:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.167 21:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.167 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.167 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.167 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:49.167 21:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.167 21:36:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:49.167 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:49.167 [2024-12-10 21:36:49.889831] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:49.167 21:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.167 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f3dd569c-fe7e-4ab6-8e03-30b092309950 '!=' f3dd569c-fe7e-4ab6-8e03-30b092309950 ']' 00:09:49.167 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:49.167 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:49.167 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:49.167 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:49.168 21:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.168 21:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.168 [2024-12-10 21:36:49.937517] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:49.168 21:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.168 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:49.168 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:49.168 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:49.168 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:49.168 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:49.168 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=1 00:09:49.168 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.168 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.168 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.168 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.168 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.427 21:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.427 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.427 21:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.427 21:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.427 21:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.427 "name": "raid_bdev1", 00:09:49.427 "uuid": "f3dd569c-fe7e-4ab6-8e03-30b092309950", 00:09:49.427 "strip_size_kb": 0, 00:09:49.427 "state": "online", 00:09:49.427 "raid_level": "raid1", 00:09:49.427 "superblock": true, 00:09:49.427 "num_base_bdevs": 2, 00:09:49.427 "num_base_bdevs_discovered": 1, 00:09:49.427 "num_base_bdevs_operational": 1, 00:09:49.427 "base_bdevs_list": [ 00:09:49.427 { 00:09:49.427 "name": null, 00:09:49.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.427 "is_configured": false, 00:09:49.427 "data_offset": 0, 00:09:49.427 "data_size": 63488 00:09:49.427 }, 00:09:49.427 { 00:09:49.427 "name": "pt2", 00:09:49.427 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:49.427 "is_configured": true, 00:09:49.427 "data_offset": 2048, 00:09:49.427 "data_size": 63488 00:09:49.427 } 00:09:49.427 ] 00:09:49.427 }' 00:09:49.427 21:36:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.427 21:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.686 21:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:49.686 21:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.686 21:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.686 [2024-12-10 21:36:50.392714] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:49.686 [2024-12-10 21:36:50.392809] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:49.686 [2024-12-10 21:36:50.392927] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:49.686 [2024-12-10 21:36:50.393005] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:49.686 [2024-12-10 21:36:50.393056] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:49.686 21:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.686 21:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.686 21:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:49.686 21:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.686 21:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.686 21:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.686 21:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:49.686 21:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:49.686 
21:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:49.686 21:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:49.686 21:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:49.686 21:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.686 21:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.686 21:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.686 21:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:49.686 21:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:49.686 21:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:49.686 21:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:49.686 21:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:09:49.686 21:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:49.686 21:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.686 21:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.686 [2024-12-10 21:36:50.464559] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:49.686 [2024-12-10 21:36:50.464629] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.686 [2024-12-10 21:36:50.464649] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:49.686 [2024-12-10 21:36:50.464660] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.686 [2024-12-10 
21:36:50.467060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.946 [2024-12-10 21:36:50.467152] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:49.946 [2024-12-10 21:36:50.467249] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:49.946 [2024-12-10 21:36:50.467310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:49.946 [2024-12-10 21:36:50.467438] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:49.946 [2024-12-10 21:36:50.467453] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:49.946 [2024-12-10 21:36:50.467724] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:49.946 [2024-12-10 21:36:50.467892] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:49.946 [2024-12-10 21:36:50.467903] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:49.946 [2024-12-10 21:36:50.468075] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.946 pt2 00:09:49.946 21:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.946 21:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:49.946 21:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:49.946 21:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:49.946 21:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:49.946 21:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:49.946 21:36:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:49.946 21:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.946 21:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.946 21:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.946 21:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.946 21:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.946 21:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.946 21:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.946 21:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.946 21:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.946 21:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.946 "name": "raid_bdev1", 00:09:49.946 "uuid": "f3dd569c-fe7e-4ab6-8e03-30b092309950", 00:09:49.946 "strip_size_kb": 0, 00:09:49.946 "state": "online", 00:09:49.946 "raid_level": "raid1", 00:09:49.946 "superblock": true, 00:09:49.946 "num_base_bdevs": 2, 00:09:49.946 "num_base_bdevs_discovered": 1, 00:09:49.946 "num_base_bdevs_operational": 1, 00:09:49.946 "base_bdevs_list": [ 00:09:49.946 { 00:09:49.946 "name": null, 00:09:49.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.946 "is_configured": false, 00:09:49.946 "data_offset": 2048, 00:09:49.946 "data_size": 63488 00:09:49.946 }, 00:09:49.946 { 00:09:49.946 "name": "pt2", 00:09:49.946 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:49.946 "is_configured": true, 00:09:49.946 "data_offset": 2048, 00:09:49.946 "data_size": 63488 00:09:49.946 } 00:09:49.946 ] 00:09:49.946 }' 
00:09:49.946 21:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.946 21:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.205 21:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:50.205 21:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.205 21:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.205 [2024-12-10 21:36:50.935774] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:50.205 [2024-12-10 21:36:50.935886] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:50.205 [2024-12-10 21:36:50.936005] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:50.205 [2024-12-10 21:36:50.936099] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:50.205 [2024-12-10 21:36:50.936150] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:50.205 21:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.205 21:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.205 21:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.205 21:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:50.205 21:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.205 21:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.464 21:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:50.464 21:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:09:50.464 21:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:09:50.464 21:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:50.464 21:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.464 21:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.464 [2024-12-10 21:36:50.995732] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:50.464 [2024-12-10 21:36:50.995871] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.464 [2024-12-10 21:36:50.995938] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:09:50.464 [2024-12-10 21:36:50.995980] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.464 [2024-12-10 21:36:50.998401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.464 [2024-12-10 21:36:50.998509] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:50.464 [2024-12-10 21:36:50.998641] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:50.464 [2024-12-10 21:36:50.998723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:50.464 [2024-12-10 21:36:50.998933] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:50.464 [2024-12-10 21:36:50.998994] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:50.464 [2024-12-10 21:36:50.999031] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:09:50.464 [2024-12-10 21:36:50.999137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:09:50.464 [2024-12-10 21:36:50.999249] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:09:50.464 [2024-12-10 21:36:50.999289] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:50.464 [2024-12-10 21:36:50.999594] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:50.464 [2024-12-10 21:36:50.999812] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:09:50.464 [2024-12-10 21:36:50.999865] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:09:50.464 [2024-12-10 21:36:51.000119] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.464 pt1 00:09:50.464 21:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.464 21:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:09:50.464 21:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:50.464 21:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.464 21:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.464 21:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:50.464 21:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:50.464 21:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:50.464 21:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.464 21:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.464 21:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:50.464 21:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.464 21:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.464 21:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.464 21:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.464 21:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.464 21:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.464 21:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.464 "name": "raid_bdev1", 00:09:50.464 "uuid": "f3dd569c-fe7e-4ab6-8e03-30b092309950", 00:09:50.464 "strip_size_kb": 0, 00:09:50.464 "state": "online", 00:09:50.464 "raid_level": "raid1", 00:09:50.464 "superblock": true, 00:09:50.464 "num_base_bdevs": 2, 00:09:50.464 "num_base_bdevs_discovered": 1, 00:09:50.464 "num_base_bdevs_operational": 1, 00:09:50.464 "base_bdevs_list": [ 00:09:50.464 { 00:09:50.464 "name": null, 00:09:50.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.464 "is_configured": false, 00:09:50.464 "data_offset": 2048, 00:09:50.464 "data_size": 63488 00:09:50.464 }, 00:09:50.464 { 00:09:50.464 "name": "pt2", 00:09:50.464 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:50.464 "is_configured": true, 00:09:50.464 "data_offset": 2048, 00:09:50.464 "data_size": 63488 00:09:50.464 } 00:09:50.464 ] 00:09:50.464 }' 00:09:50.464 21:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.464 21:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.723 21:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:50.723 21:36:51 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.723 21:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.723 21:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:50.723 21:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.005 21:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:51.005 21:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:51.005 21:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:51.005 21:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.005 21:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.005 [2024-12-10 21:36:51.547564] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:51.005 21:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.005 21:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' f3dd569c-fe7e-4ab6-8e03-30b092309950 '!=' f3dd569c-fe7e-4ab6-8e03-30b092309950 ']' 00:09:51.005 21:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63293 00:09:51.005 21:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63293 ']' 00:09:51.005 21:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63293 00:09:51.005 21:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:51.005 21:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:51.005 21:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63293 00:09:51.005 21:36:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:51.005 21:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:51.005 21:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63293' 00:09:51.005 killing process with pid 63293 00:09:51.005 21:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63293 00:09:51.005 [2024-12-10 21:36:51.633334] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:51.005 [2024-12-10 21:36:51.633539] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:51.005 21:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63293 00:09:51.005 [2024-12-10 21:36:51.633635] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:51.005 [2024-12-10 21:36:51.633668] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:09:51.268 [2024-12-10 21:36:51.859665] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:52.643 21:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:52.643 00:09:52.643 real 0m6.371s 00:09:52.643 user 0m9.715s 00:09:52.643 sys 0m0.998s 00:09:52.643 21:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.643 21:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.643 ************************************ 00:09:52.643 END TEST raid_superblock_test 00:09:52.643 ************************************ 00:09:52.643 21:36:53 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:09:52.643 21:36:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:52.643 21:36:53 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.643 21:36:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:52.643 ************************************ 00:09:52.643 START TEST raid_read_error_test 00:09:52.643 ************************************ 00:09:52.643 21:36:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:09:52.643 21:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:52.643 21:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:52.643 21:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:52.643 21:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:52.643 21:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:52.643 21:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:52.643 21:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:52.643 21:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:52.643 21:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:52.643 21:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:52.643 21:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:52.643 21:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:52.643 21:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:52.643 21:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:52.643 21:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:52.643 21:36:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:52.643 21:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:52.643 21:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:52.644 21:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:52.644 21:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:52.644 21:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:52.644 21:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.SSpfvPhWbm 00:09:52.644 21:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63630 00:09:52.644 21:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:52.644 21:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63630 00:09:52.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.644 21:36:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63630 ']' 00:09:52.644 21:36:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.644 21:36:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:52.644 21:36:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:52.644 21:36:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:52.644 21:36:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.644 [2024-12-10 21:36:53.242552] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:09:52.644 [2024-12-10 21:36:53.242672] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63630 ] 00:09:52.644 [2024-12-10 21:36:53.417275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.902 [2024-12-10 21:36:53.544421] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.160 [2024-12-10 21:36:53.765570] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.160 [2024-12-10 21:36:53.765658] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.418 21:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:53.418 21:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:53.418 21:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:53.418 21:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:53.418 21:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.418 21:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.418 BaseBdev1_malloc 00:09:53.418 21:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.418 21:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:09:53.418 21:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.418 21:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.418 true 00:09:53.418 21:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.418 21:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:53.418 21:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.418 21:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.418 [2024-12-10 21:36:54.149531] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:53.418 [2024-12-10 21:36:54.149596] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.418 [2024-12-10 21:36:54.149621] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:53.418 [2024-12-10 21:36:54.149634] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.418 [2024-12-10 21:36:54.152014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.418 [2024-12-10 21:36:54.152062] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:53.418 BaseBdev1 00:09:53.418 21:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.418 21:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:53.418 21:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:53.418 21:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.418 21:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:09:53.677 BaseBdev2_malloc 00:09:53.677 21:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.677 21:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:53.677 21:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.677 21:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.677 true 00:09:53.677 21:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.677 21:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:53.677 21:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.677 21:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.677 [2024-12-10 21:36:54.220009] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:53.677 [2024-12-10 21:36:54.220079] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.677 [2024-12-10 21:36:54.220102] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:53.677 [2024-12-10 21:36:54.220115] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.677 [2024-12-10 21:36:54.222574] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.677 [2024-12-10 21:36:54.222614] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:53.677 BaseBdev2 00:09:53.677 21:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.677 21:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:53.677 21:36:54 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.677 21:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.677 [2024-12-10 21:36:54.232042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:53.677 [2024-12-10 21:36:54.234173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:53.677 [2024-12-10 21:36:54.234394] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:53.677 [2024-12-10 21:36:54.234412] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:53.677 [2024-12-10 21:36:54.234706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:53.678 [2024-12-10 21:36:54.234924] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:53.678 [2024-12-10 21:36:54.234943] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:53.678 [2024-12-10 21:36:54.235138] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:53.678 21:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.678 21:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:53.678 21:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:53.678 21:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:53.678 21:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:53.678 21:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:53.678 21:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:09:53.678 21:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.678 21:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.678 21:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.678 21:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.678 21:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.678 21:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:53.678 21:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.678 21:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.678 21:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.678 21:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.678 "name": "raid_bdev1", 00:09:53.678 "uuid": "39d96729-ba45-4510-8ee3-a8403bc021e6", 00:09:53.678 "strip_size_kb": 0, 00:09:53.678 "state": "online", 00:09:53.678 "raid_level": "raid1", 00:09:53.678 "superblock": true, 00:09:53.678 "num_base_bdevs": 2, 00:09:53.678 "num_base_bdevs_discovered": 2, 00:09:53.678 "num_base_bdevs_operational": 2, 00:09:53.678 "base_bdevs_list": [ 00:09:53.678 { 00:09:53.678 "name": "BaseBdev1", 00:09:53.678 "uuid": "aab95310-e2b2-50b7-95bd-d0ee7d03a89d", 00:09:53.678 "is_configured": true, 00:09:53.678 "data_offset": 2048, 00:09:53.678 "data_size": 63488 00:09:53.678 }, 00:09:53.678 { 00:09:53.678 "name": "BaseBdev2", 00:09:53.678 "uuid": "66abfe90-a449-5859-940a-fdc3ae66aba6", 00:09:53.678 "is_configured": true, 00:09:53.678 "data_offset": 2048, 00:09:53.678 "data_size": 63488 00:09:53.678 } 00:09:53.678 ] 00:09:53.678 }' 00:09:53.678 21:36:54 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.678 21:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.937 21:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:53.937 21:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:54.195 [2024-12-10 21:36:54.752720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:55.131 21:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:55.131 21:36:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.131 21:36:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.131 21:36:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.131 21:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:55.131 21:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:55.131 21:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:55.131 21:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:55.131 21:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:55.131 21:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:55.131 21:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:55.131 21:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.131 21:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:55.131 21:36:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:55.131 21:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.131 21:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.131 21:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.131 21:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.131 21:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.131 21:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:55.131 21:36:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.131 21:36:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.131 21:36:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.131 21:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.131 "name": "raid_bdev1", 00:09:55.131 "uuid": "39d96729-ba45-4510-8ee3-a8403bc021e6", 00:09:55.131 "strip_size_kb": 0, 00:09:55.131 "state": "online", 00:09:55.131 "raid_level": "raid1", 00:09:55.131 "superblock": true, 00:09:55.131 "num_base_bdevs": 2, 00:09:55.131 "num_base_bdevs_discovered": 2, 00:09:55.131 "num_base_bdevs_operational": 2, 00:09:55.131 "base_bdevs_list": [ 00:09:55.131 { 00:09:55.131 "name": "BaseBdev1", 00:09:55.131 "uuid": "aab95310-e2b2-50b7-95bd-d0ee7d03a89d", 00:09:55.131 "is_configured": true, 00:09:55.131 "data_offset": 2048, 00:09:55.131 "data_size": 63488 00:09:55.131 }, 00:09:55.131 { 00:09:55.131 "name": "BaseBdev2", 00:09:55.131 "uuid": "66abfe90-a449-5859-940a-fdc3ae66aba6", 00:09:55.131 "is_configured": true, 00:09:55.131 "data_offset": 2048, 00:09:55.131 "data_size": 63488 
00:09:55.131 } 00:09:55.131 ] 00:09:55.131 }' 00:09:55.131 21:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.131 21:36:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.391 21:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:55.391 21:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.391 21:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.391 [2024-12-10 21:36:56.141551] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:55.391 [2024-12-10 21:36:56.141590] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:55.391 [2024-12-10 21:36:56.144638] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:55.391 [2024-12-10 21:36:56.144686] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.391 [2024-12-10 21:36:56.144769] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:55.391 [2024-12-10 21:36:56.144782] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:55.391 { 00:09:55.391 "results": [ 00:09:55.391 { 00:09:55.391 "job": "raid_bdev1", 00:09:55.391 "core_mask": "0x1", 00:09:55.391 "workload": "randrw", 00:09:55.391 "percentage": 50, 00:09:55.391 "status": "finished", 00:09:55.391 "queue_depth": 1, 00:09:55.391 "io_size": 131072, 00:09:55.391 "runtime": 1.389333, 00:09:55.391 "iops": 16019.197701343019, 00:09:55.391 "mibps": 2002.3997126678773, 00:09:55.391 "io_failed": 0, 00:09:55.391 "io_timeout": 0, 00:09:55.391 "avg_latency_us": 59.40083945764882, 00:09:55.391 "min_latency_us": 24.593886462882097, 00:09:55.391 "max_latency_us": 1430.9170305676855 00:09:55.391 } 00:09:55.391 ], 
00:09:55.391 "core_count": 1 00:09:55.391 } 00:09:55.391 21:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.391 21:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63630 00:09:55.391 21:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63630 ']' 00:09:55.391 21:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63630 00:09:55.391 21:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:55.391 21:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:55.391 21:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63630 00:09:55.650 killing process with pid 63630 00:09:55.650 21:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:55.650 21:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:55.650 21:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63630' 00:09:55.650 21:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63630 00:09:55.650 [2024-12-10 21:36:56.187469] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:55.650 21:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63630 00:09:55.650 [2024-12-10 21:36:56.337075] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:57.101 21:36:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:57.101 21:36:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.SSpfvPhWbm 00:09:57.101 21:36:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:57.101 21:36:57 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:57.101 21:36:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:57.101 21:36:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:57.101 21:36:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:57.101 21:36:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:57.101 00:09:57.101 real 0m4.504s 00:09:57.101 user 0m5.372s 00:09:57.101 sys 0m0.556s 00:09:57.101 21:36:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.101 ************************************ 00:09:57.101 END TEST raid_read_error_test 00:09:57.101 ************************************ 00:09:57.101 21:36:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.101 21:36:57 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:09:57.101 21:36:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:57.101 21:36:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.101 21:36:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:57.101 ************************************ 00:09:57.101 START TEST raid_write_error_test 00:09:57.101 ************************************ 00:09:57.101 21:36:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:09:57.101 21:36:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:57.101 21:36:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:57.101 21:36:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:57.101 21:36:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:57.101 21:36:57 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:57.101 21:36:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:57.101 21:36:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:57.101 21:36:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:57.101 21:36:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:57.101 21:36:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:57.101 21:36:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:57.101 21:36:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:57.101 21:36:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:57.101 21:36:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:57.101 21:36:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:57.101 21:36:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:57.101 21:36:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:57.101 21:36:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:57.101 21:36:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:57.101 21:36:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:57.101 21:36:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:57.101 21:36:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.22mTI7cX0Z 00:09:57.101 21:36:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63770 00:09:57.101 21:36:57 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:57.101 21:36:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63770 00:09:57.101 21:36:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63770 ']' 00:09:57.101 21:36:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.101 21:36:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:57.101 21:36:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.101 21:36:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:57.101 21:36:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.101 [2024-12-10 21:36:57.811777] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:09:57.101 [2024-12-10 21:36:57.812239] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63770 ] 00:09:57.361 [2024-12-10 21:36:57.990485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.361 [2024-12-10 21:36:58.115346] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.620 [2024-12-10 21:36:58.329271] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.620 [2024-12-10 21:36:58.329332] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.189 21:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:58.189 21:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:58.189 21:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:58.189 21:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:58.189 21:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.189 21:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.189 BaseBdev1_malloc 00:09:58.189 21:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.189 21:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:58.189 21:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.189 21:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.189 true 00:09:58.189 21:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:58.189 21:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:58.189 21:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.189 21:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.189 [2024-12-10 21:36:58.799608] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:58.189 [2024-12-10 21:36:58.799687] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.189 [2024-12-10 21:36:58.799710] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:58.189 [2024-12-10 21:36:58.799722] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.189 [2024-12-10 21:36:58.802137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.189 [2024-12-10 21:36:58.802183] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:58.189 BaseBdev1 00:09:58.189 21:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.189 21:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:58.189 21:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:58.189 21:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.189 21:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.189 BaseBdev2_malloc 00:09:58.189 21:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.189 21:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:58.189 21:36:58 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.189 21:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.189 true 00:09:58.189 21:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.189 21:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:58.189 21:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.189 21:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.189 [2024-12-10 21:36:58.869887] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:58.189 [2024-12-10 21:36:58.869962] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.189 [2024-12-10 21:36:58.869983] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:58.189 [2024-12-10 21:36:58.869995] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.189 [2024-12-10 21:36:58.872425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.189 [2024-12-10 21:36:58.872482] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:58.189 BaseBdev2 00:09:58.189 21:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.189 21:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:58.189 21:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.189 21:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.189 [2024-12-10 21:36:58.881934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:58.189 [2024-12-10 21:36:58.884233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:58.189 [2024-12-10 21:36:58.884578] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:58.190 [2024-12-10 21:36:58.884645] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:58.190 [2024-12-10 21:36:58.885031] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:58.190 [2024-12-10 21:36:58.885322] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:58.190 [2024-12-10 21:36:58.885372] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:58.190 [2024-12-10 21:36:58.885640] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:58.190 21:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.190 21:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:58.190 21:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:58.190 21:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:58.190 21:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.190 21:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.190 21:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:58.190 21:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.190 21:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.190 21:36:58 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.190 21:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.190 21:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.190 21:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:58.190 21:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.190 21:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.190 21:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.190 21:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.190 "name": "raid_bdev1", 00:09:58.190 "uuid": "ee17a928-6d22-458f-9f2e-797508ab715f", 00:09:58.190 "strip_size_kb": 0, 00:09:58.190 "state": "online", 00:09:58.190 "raid_level": "raid1", 00:09:58.190 "superblock": true, 00:09:58.190 "num_base_bdevs": 2, 00:09:58.190 "num_base_bdevs_discovered": 2, 00:09:58.190 "num_base_bdevs_operational": 2, 00:09:58.190 "base_bdevs_list": [ 00:09:58.190 { 00:09:58.190 "name": "BaseBdev1", 00:09:58.190 "uuid": "56f24117-4cef-58d9-8afc-9c8d8950da50", 00:09:58.190 "is_configured": true, 00:09:58.190 "data_offset": 2048, 00:09:58.190 "data_size": 63488 00:09:58.190 }, 00:09:58.190 { 00:09:58.190 "name": "BaseBdev2", 00:09:58.190 "uuid": "147d46da-a66a-5437-88f4-456617280906", 00:09:58.190 "is_configured": true, 00:09:58.190 "data_offset": 2048, 00:09:58.190 "data_size": 63488 00:09:58.190 } 00:09:58.190 ] 00:09:58.190 }' 00:09:58.190 21:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.190 21:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.757 21:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:58.757 21:36:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:58.757 [2024-12-10 21:36:59.386478] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:59.694 21:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:59.694 21:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.694 21:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.694 [2024-12-10 21:37:00.295109] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:59.694 [2024-12-10 21:37:00.295176] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:59.694 [2024-12-10 21:37:00.295375] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:09:59.694 21:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.694 21:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:59.694 21:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:59.694 21:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:59.694 21:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:09:59.694 21:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:59.694 21:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:59.694 21:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:59.694 21:37:00 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.694 21:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.695 21:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:59.695 21:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.695 21:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.695 21:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.695 21:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.695 21:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.695 21:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:59.695 21:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.695 21:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.695 21:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.695 21:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.695 "name": "raid_bdev1", 00:09:59.695 "uuid": "ee17a928-6d22-458f-9f2e-797508ab715f", 00:09:59.695 "strip_size_kb": 0, 00:09:59.695 "state": "online", 00:09:59.695 "raid_level": "raid1", 00:09:59.695 "superblock": true, 00:09:59.695 "num_base_bdevs": 2, 00:09:59.695 "num_base_bdevs_discovered": 1, 00:09:59.695 "num_base_bdevs_operational": 1, 00:09:59.695 "base_bdevs_list": [ 00:09:59.695 { 00:09:59.695 "name": null, 00:09:59.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.695 "is_configured": false, 00:09:59.695 "data_offset": 0, 00:09:59.695 "data_size": 63488 00:09:59.695 }, 00:09:59.695 { 00:09:59.695 "name": 
"BaseBdev2", 00:09:59.695 "uuid": "147d46da-a66a-5437-88f4-456617280906", 00:09:59.695 "is_configured": true, 00:09:59.695 "data_offset": 2048, 00:09:59.695 "data_size": 63488 00:09:59.695 } 00:09:59.695 ] 00:09:59.695 }' 00:09:59.695 21:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.695 21:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.265 21:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:00.265 21:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.265 21:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.265 [2024-12-10 21:37:00.793855] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:00.265 [2024-12-10 21:37:00.793973] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:00.265 [2024-12-10 21:37:00.797230] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:00.265 [2024-12-10 21:37:00.797320] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:00.265 [2024-12-10 21:37:00.797409] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:00.265 [2024-12-10 21:37:00.797489] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:00.265 { 00:10:00.265 "results": [ 00:10:00.265 { 00:10:00.265 "job": "raid_bdev1", 00:10:00.265 "core_mask": "0x1", 00:10:00.265 "workload": "randrw", 00:10:00.265 "percentage": 50, 00:10:00.265 "status": "finished", 00:10:00.265 "queue_depth": 1, 00:10:00.265 "io_size": 131072, 00:10:00.265 "runtime": 1.408338, 00:10:00.265 "iops": 19253.19064031504, 00:10:00.265 "mibps": 2406.64883003938, 00:10:00.265 "io_failed": 0, 00:10:00.265 "io_timeout": 0, 
00:10:00.265 "avg_latency_us": 49.0159418359615, 00:10:00.265 "min_latency_us": 23.811353711790392, 00:10:00.265 "max_latency_us": 1595.4724890829693 00:10:00.265 } 00:10:00.265 ], 00:10:00.265 "core_count": 1 00:10:00.265 } 00:10:00.265 21:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.265 21:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63770 00:10:00.265 21:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63770 ']' 00:10:00.265 21:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63770 00:10:00.265 21:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:00.265 21:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:00.265 21:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63770 00:10:00.265 21:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:00.265 21:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:00.265 killing process with pid 63770 00:10:00.265 21:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63770' 00:10:00.265 21:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63770 00:10:00.265 [2024-12-10 21:37:00.846270] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:00.265 21:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63770 00:10:00.265 [2024-12-10 21:37:00.996635] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:01.761 21:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.22mTI7cX0Z 00:10:01.761 21:37:02 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:01.761 21:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:01.761 ************************************ 00:10:01.761 END TEST raid_write_error_test 00:10:01.761 ************************************ 00:10:01.761 21:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:01.761 21:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:01.761 21:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:01.761 21:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:01.761 21:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:01.761 00:10:01.761 real 0m4.617s 00:10:01.761 user 0m5.542s 00:10:01.761 sys 0m0.576s 00:10:01.761 21:37:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.761 21:37:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.761 21:37:02 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:01.761 21:37:02 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:01.761 21:37:02 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:10:01.761 21:37:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:01.761 21:37:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.761 21:37:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:01.761 ************************************ 00:10:01.761 START TEST raid_state_function_test 00:10:01.761 ************************************ 00:10:01.761 21:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:10:01.761 21:37:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:01.761 21:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:01.761 21:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:01.761 21:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:01.761 21:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:01.761 21:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.761 21:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:01.761 21:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:01.761 21:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.761 21:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:01.761 21:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:01.761 21:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.761 21:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:01.761 21:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:01.761 21:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.761 21:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:01.761 21:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:01.761 21:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:01.761 21:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:01.761 
21:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:01.761 21:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:01.761 21:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:01.761 21:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:01.761 21:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:01.761 21:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:01.761 21:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:01.761 21:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63919 00:10:01.761 21:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:01.761 21:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63919' 00:10:01.761 Process raid pid: 63919 00:10:01.761 21:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63919 00:10:01.761 21:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63919 ']' 00:10:01.761 21:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.761 21:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:01.761 21:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:01.761 21:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:01.761 21:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.761 [2024-12-10 21:37:02.492991] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:10:01.761 [2024-12-10 21:37:02.493848] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:02.020 [2024-12-10 21:37:02.691546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.280 [2024-12-10 21:37:02.811530] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.280 [2024-12-10 21:37:03.025061] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.280 [2024-12-10 21:37:03.025107] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.850 21:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:02.850 21:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:02.850 21:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:02.850 21:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.850 21:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.850 [2024-12-10 21:37:03.351983] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:02.850 [2024-12-10 21:37:03.352045] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:02.850 [2024-12-10 21:37:03.352057] 
bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:02.850 [2024-12-10 21:37:03.352068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:02.850 [2024-12-10 21:37:03.352075] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:02.850 [2024-12-10 21:37:03.352085] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:02.850 21:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.850 21:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:02.850 21:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.850 21:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.850 21:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.850 21:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.850 21:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.850 21:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.850 21:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.850 21:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.850 21:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.850 21:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.850 21:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:10:02.850 21:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.850 21:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.850 21:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.850 21:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.850 "name": "Existed_Raid", 00:10:02.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.850 "strip_size_kb": 64, 00:10:02.850 "state": "configuring", 00:10:02.850 "raid_level": "raid0", 00:10:02.850 "superblock": false, 00:10:02.850 "num_base_bdevs": 3, 00:10:02.850 "num_base_bdevs_discovered": 0, 00:10:02.850 "num_base_bdevs_operational": 3, 00:10:02.850 "base_bdevs_list": [ 00:10:02.850 { 00:10:02.850 "name": "BaseBdev1", 00:10:02.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.850 "is_configured": false, 00:10:02.850 "data_offset": 0, 00:10:02.850 "data_size": 0 00:10:02.850 }, 00:10:02.850 { 00:10:02.850 "name": "BaseBdev2", 00:10:02.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.850 "is_configured": false, 00:10:02.850 "data_offset": 0, 00:10:02.850 "data_size": 0 00:10:02.850 }, 00:10:02.850 { 00:10:02.850 "name": "BaseBdev3", 00:10:02.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.850 "is_configured": false, 00:10:02.850 "data_offset": 0, 00:10:02.850 "data_size": 0 00:10:02.850 } 00:10:02.850 ] 00:10:02.850 }' 00:10:02.850 21:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.850 21:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.110 21:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:03.110 21:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.110 21:37:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.110 [2024-12-10 21:37:03.831146] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:03.110 [2024-12-10 21:37:03.831250] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:03.110 21:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.110 21:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:03.110 21:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.110 21:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.110 [2024-12-10 21:37:03.843144] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:03.110 [2024-12-10 21:37:03.843190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:03.110 [2024-12-10 21:37:03.843200] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:03.110 [2024-12-10 21:37:03.843209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:03.110 [2024-12-10 21:37:03.843215] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:03.110 [2024-12-10 21:37:03.843224] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:03.110 21:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.110 21:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:03.110 21:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:03.110 21:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.110 [2024-12-10 21:37:03.889615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:03.370 BaseBdev1 00:10:03.370 21:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.370 21:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:03.370 21:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:03.370 21:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:03.370 21:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:03.370 21:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:03.370 21:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:03.370 21:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:03.370 21:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.370 21:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.370 21:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.370 21:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:03.370 21:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.370 21:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.370 [ 00:10:03.370 { 00:10:03.370 "name": "BaseBdev1", 00:10:03.370 "aliases": [ 00:10:03.370 "7dba18e6-ec2d-41fe-afc7-bedf3fe28f91" 00:10:03.370 ], 00:10:03.370 
"product_name": "Malloc disk", 00:10:03.370 "block_size": 512, 00:10:03.370 "num_blocks": 65536, 00:10:03.370 "uuid": "7dba18e6-ec2d-41fe-afc7-bedf3fe28f91", 00:10:03.370 "assigned_rate_limits": { 00:10:03.370 "rw_ios_per_sec": 0, 00:10:03.370 "rw_mbytes_per_sec": 0, 00:10:03.370 "r_mbytes_per_sec": 0, 00:10:03.370 "w_mbytes_per_sec": 0 00:10:03.370 }, 00:10:03.370 "claimed": true, 00:10:03.370 "claim_type": "exclusive_write", 00:10:03.370 "zoned": false, 00:10:03.370 "supported_io_types": { 00:10:03.370 "read": true, 00:10:03.370 "write": true, 00:10:03.370 "unmap": true, 00:10:03.370 "flush": true, 00:10:03.370 "reset": true, 00:10:03.370 "nvme_admin": false, 00:10:03.370 "nvme_io": false, 00:10:03.370 "nvme_io_md": false, 00:10:03.370 "write_zeroes": true, 00:10:03.370 "zcopy": true, 00:10:03.370 "get_zone_info": false, 00:10:03.371 "zone_management": false, 00:10:03.371 "zone_append": false, 00:10:03.371 "compare": false, 00:10:03.371 "compare_and_write": false, 00:10:03.371 "abort": true, 00:10:03.371 "seek_hole": false, 00:10:03.371 "seek_data": false, 00:10:03.371 "copy": true, 00:10:03.371 "nvme_iov_md": false 00:10:03.371 }, 00:10:03.371 "memory_domains": [ 00:10:03.371 { 00:10:03.371 "dma_device_id": "system", 00:10:03.371 "dma_device_type": 1 00:10:03.371 }, 00:10:03.371 { 00:10:03.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.371 "dma_device_type": 2 00:10:03.371 } 00:10:03.371 ], 00:10:03.371 "driver_specific": {} 00:10:03.371 } 00:10:03.371 ] 00:10:03.371 21:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.371 21:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:03.371 21:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:03.371 21:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.371 21:37:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.371 21:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.371 21:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.371 21:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.371 21:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.371 21:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.371 21:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.371 21:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.371 21:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.371 21:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.371 21:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.371 21:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.371 21:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.371 21:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.371 "name": "Existed_Raid", 00:10:03.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.371 "strip_size_kb": 64, 00:10:03.371 "state": "configuring", 00:10:03.371 "raid_level": "raid0", 00:10:03.371 "superblock": false, 00:10:03.371 "num_base_bdevs": 3, 00:10:03.371 "num_base_bdevs_discovered": 1, 00:10:03.371 "num_base_bdevs_operational": 3, 00:10:03.371 "base_bdevs_list": [ 00:10:03.371 { 00:10:03.371 "name": "BaseBdev1", 
00:10:03.371 "uuid": "7dba18e6-ec2d-41fe-afc7-bedf3fe28f91", 00:10:03.371 "is_configured": true, 00:10:03.371 "data_offset": 0, 00:10:03.371 "data_size": 65536 00:10:03.371 }, 00:10:03.371 { 00:10:03.371 "name": "BaseBdev2", 00:10:03.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.371 "is_configured": false, 00:10:03.371 "data_offset": 0, 00:10:03.371 "data_size": 0 00:10:03.371 }, 00:10:03.371 { 00:10:03.371 "name": "BaseBdev3", 00:10:03.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.371 "is_configured": false, 00:10:03.371 "data_offset": 0, 00:10:03.371 "data_size": 0 00:10:03.371 } 00:10:03.371 ] 00:10:03.371 }' 00:10:03.371 21:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.371 21:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.631 21:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:03.631 21:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.631 21:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.631 [2024-12-10 21:37:04.341000] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:03.631 [2024-12-10 21:37:04.341068] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:03.631 21:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.631 21:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:03.631 21:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.631 21:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.631 [2024-12-10 
21:37:04.353088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:03.631 [2024-12-10 21:37:04.355638] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:03.631 [2024-12-10 21:37:04.355692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:03.631 [2024-12-10 21:37:04.355703] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:03.631 [2024-12-10 21:37:04.355730] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:03.631 21:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.631 21:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:03.631 21:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:03.631 21:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:03.631 21:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.631 21:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.631 21:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.631 21:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.631 21:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.631 21:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.631 21:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.631 21:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:03.631 21:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.631 21:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.631 21:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.631 21:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.631 21:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.631 21:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.631 21:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.631 "name": "Existed_Raid", 00:10:03.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.631 "strip_size_kb": 64, 00:10:03.631 "state": "configuring", 00:10:03.631 "raid_level": "raid0", 00:10:03.631 "superblock": false, 00:10:03.631 "num_base_bdevs": 3, 00:10:03.631 "num_base_bdevs_discovered": 1, 00:10:03.631 "num_base_bdevs_operational": 3, 00:10:03.631 "base_bdevs_list": [ 00:10:03.631 { 00:10:03.631 "name": "BaseBdev1", 00:10:03.631 "uuid": "7dba18e6-ec2d-41fe-afc7-bedf3fe28f91", 00:10:03.631 "is_configured": true, 00:10:03.631 "data_offset": 0, 00:10:03.631 "data_size": 65536 00:10:03.631 }, 00:10:03.631 { 00:10:03.631 "name": "BaseBdev2", 00:10:03.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.631 "is_configured": false, 00:10:03.631 "data_offset": 0, 00:10:03.631 "data_size": 0 00:10:03.631 }, 00:10:03.631 { 00:10:03.631 "name": "BaseBdev3", 00:10:03.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.631 "is_configured": false, 00:10:03.631 "data_offset": 0, 00:10:03.631 "data_size": 0 00:10:03.631 } 00:10:03.631 ] 00:10:03.631 }' 00:10:03.631 21:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
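The `verify_raid_bdev_state` calls in the log above work by piping `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "Existed_Raid")'` and comparing fields of the selected object against the expected values. As a hedged illustration only (the JSON below is copied from the log output above, not fetched from a live SPDK target), the same selection and checks in Python:

```python
import json

# Trimmed copy of the `bdev_raid_get_bdevs all` output shown in the log:
# one array entry per raid bdev, with the fields the test checks.
raid_bdevs = json.loads("""
[
  {
    "name": "Existed_Raid",
    "uuid": "00000000-0000-0000-0000-000000000000",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "raid0",
    "superblock": false,
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 3
  }
]
""")

# Equivalent of: jq -r '.[] | select(.name == "Existed_Raid")'
info = next(b for b in raid_bdevs if b["name"] == "Existed_Raid")

# The comparisons verify_raid_bdev_state makes on the selected object.
assert info["state"] == "configuring"
assert info["raid_level"] == "raid0"
assert info["strip_size_kb"] == 64
assert info["num_base_bdevs_operational"] == 3
```

At this point in the log only BaseBdev1 has been created, which is why `num_base_bdevs_discovered` is 1 while the array stays in the `configuring` state waiting for the remaining members.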
00:10:03.631 21:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.198 21:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:04.198 21:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.198 21:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.198 [2024-12-10 21:37:04.837355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:04.198 BaseBdev2 00:10:04.198 21:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.198 21:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:04.198 21:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:04.198 21:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:04.198 21:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:04.198 21:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:04.198 21:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:04.198 21:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:04.198 21:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.199 21:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.199 21:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.199 21:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:04.199 21:37:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.199 21:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.199 [ 00:10:04.199 { 00:10:04.199 "name": "BaseBdev2", 00:10:04.199 "aliases": [ 00:10:04.199 "674db6ef-2906-4ef0-a2d8-27af468b6de6" 00:10:04.199 ], 00:10:04.199 "product_name": "Malloc disk", 00:10:04.199 "block_size": 512, 00:10:04.199 "num_blocks": 65536, 00:10:04.199 "uuid": "674db6ef-2906-4ef0-a2d8-27af468b6de6", 00:10:04.199 "assigned_rate_limits": { 00:10:04.199 "rw_ios_per_sec": 0, 00:10:04.199 "rw_mbytes_per_sec": 0, 00:10:04.199 "r_mbytes_per_sec": 0, 00:10:04.199 "w_mbytes_per_sec": 0 00:10:04.199 }, 00:10:04.199 "claimed": true, 00:10:04.199 "claim_type": "exclusive_write", 00:10:04.199 "zoned": false, 00:10:04.199 "supported_io_types": { 00:10:04.199 "read": true, 00:10:04.199 "write": true, 00:10:04.199 "unmap": true, 00:10:04.199 "flush": true, 00:10:04.199 "reset": true, 00:10:04.199 "nvme_admin": false, 00:10:04.199 "nvme_io": false, 00:10:04.199 "nvme_io_md": false, 00:10:04.199 "write_zeroes": true, 00:10:04.199 "zcopy": true, 00:10:04.199 "get_zone_info": false, 00:10:04.199 "zone_management": false, 00:10:04.199 "zone_append": false, 00:10:04.199 "compare": false, 00:10:04.199 "compare_and_write": false, 00:10:04.199 "abort": true, 00:10:04.199 "seek_hole": false, 00:10:04.199 "seek_data": false, 00:10:04.199 "copy": true, 00:10:04.199 "nvme_iov_md": false 00:10:04.199 }, 00:10:04.199 "memory_domains": [ 00:10:04.199 { 00:10:04.199 "dma_device_id": "system", 00:10:04.199 "dma_device_type": 1 00:10:04.199 }, 00:10:04.199 { 00:10:04.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.199 "dma_device_type": 2 00:10:04.199 } 00:10:04.199 ], 00:10:04.199 "driver_specific": {} 00:10:04.199 } 00:10:04.199 ] 00:10:04.199 21:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.199 21:37:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:04.199 21:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:04.199 21:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:04.199 21:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:04.199 21:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.199 21:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.199 21:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.199 21:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.199 21:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.199 21:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.199 21:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.199 21:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.199 21:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.199 21:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.199 21:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.199 21:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.199 21:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.199 21:37:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.199 21:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.199 "name": "Existed_Raid", 00:10:04.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.199 "strip_size_kb": 64, 00:10:04.199 "state": "configuring", 00:10:04.199 "raid_level": "raid0", 00:10:04.199 "superblock": false, 00:10:04.199 "num_base_bdevs": 3, 00:10:04.199 "num_base_bdevs_discovered": 2, 00:10:04.199 "num_base_bdevs_operational": 3, 00:10:04.199 "base_bdevs_list": [ 00:10:04.199 { 00:10:04.199 "name": "BaseBdev1", 00:10:04.199 "uuid": "7dba18e6-ec2d-41fe-afc7-bedf3fe28f91", 00:10:04.199 "is_configured": true, 00:10:04.199 "data_offset": 0, 00:10:04.199 "data_size": 65536 00:10:04.199 }, 00:10:04.199 { 00:10:04.199 "name": "BaseBdev2", 00:10:04.199 "uuid": "674db6ef-2906-4ef0-a2d8-27af468b6de6", 00:10:04.199 "is_configured": true, 00:10:04.199 "data_offset": 0, 00:10:04.199 "data_size": 65536 00:10:04.199 }, 00:10:04.199 { 00:10:04.199 "name": "BaseBdev3", 00:10:04.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.199 "is_configured": false, 00:10:04.199 "data_offset": 0, 00:10:04.199 "data_size": 0 00:10:04.199 } 00:10:04.199 ] 00:10:04.199 }' 00:10:04.199 21:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.199 21:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.770 21:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:04.770 21:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.770 21:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.770 [2024-12-10 21:37:05.350778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:04.770 [2024-12-10 21:37:05.350925] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:04.770 [2024-12-10 21:37:05.350964] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:04.770 [2024-12-10 21:37:05.351318] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:04.770 [2024-12-10 21:37:05.351652] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:04.770 [2024-12-10 21:37:05.351732] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:04.770 [2024-12-10 21:37:05.352203] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.770 BaseBdev3 00:10:04.770 21:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.770 21:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:04.770 21:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:04.770 21:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:04.770 21:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:04.770 21:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:04.770 21:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:04.770 21:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:04.771 21:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.771 21:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.771 21:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.771 
21:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:04.771 21:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.771 21:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.771 [ 00:10:04.771 { 00:10:04.771 "name": "BaseBdev3", 00:10:04.771 "aliases": [ 00:10:04.771 "360cde92-d98e-495f-979d-6b87ec2ccf3b" 00:10:04.771 ], 00:10:04.771 "product_name": "Malloc disk", 00:10:04.771 "block_size": 512, 00:10:04.771 "num_blocks": 65536, 00:10:04.771 "uuid": "360cde92-d98e-495f-979d-6b87ec2ccf3b", 00:10:04.771 "assigned_rate_limits": { 00:10:04.771 "rw_ios_per_sec": 0, 00:10:04.771 "rw_mbytes_per_sec": 0, 00:10:04.771 "r_mbytes_per_sec": 0, 00:10:04.771 "w_mbytes_per_sec": 0 00:10:04.771 }, 00:10:04.771 "claimed": true, 00:10:04.771 "claim_type": "exclusive_write", 00:10:04.771 "zoned": false, 00:10:04.771 "supported_io_types": { 00:10:04.771 "read": true, 00:10:04.771 "write": true, 00:10:04.771 "unmap": true, 00:10:04.771 "flush": true, 00:10:04.771 "reset": true, 00:10:04.771 "nvme_admin": false, 00:10:04.771 "nvme_io": false, 00:10:04.771 "nvme_io_md": false, 00:10:04.771 "write_zeroes": true, 00:10:04.771 "zcopy": true, 00:10:04.771 "get_zone_info": false, 00:10:04.771 "zone_management": false, 00:10:04.771 "zone_append": false, 00:10:04.771 "compare": false, 00:10:04.771 "compare_and_write": false, 00:10:04.771 "abort": true, 00:10:04.771 "seek_hole": false, 00:10:04.771 "seek_data": false, 00:10:04.771 "copy": true, 00:10:04.771 "nvme_iov_md": false 00:10:04.771 }, 00:10:04.771 "memory_domains": [ 00:10:04.771 { 00:10:04.771 "dma_device_id": "system", 00:10:04.771 "dma_device_type": 1 00:10:04.771 }, 00:10:04.771 { 00:10:04.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.771 "dma_device_type": 2 00:10:04.771 } 00:10:04.771 ], 00:10:04.771 "driver_specific": {} 00:10:04.771 } 00:10:04.771 ] 
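Once the third base bdev is claimed, the log reports the assembled volume as `blockcnt 196608, blocklen 512`. That figure follows from plain raid0 arithmetic; a minimal sketch, assuming each Malloc base bdev contributes its full capacity (the `bdev_malloc_create 32 512` calls above create 32 MiB bdevs with a 512-byte block size, i.e. 65536 blocks each, matching the `num_blocks` in the dumps):

```python
base_bdevs = 3
malloc_size_mib = 32          # bdev_malloc_create 32 512
block_size = 512              # bytes per block

blocks_per_base = malloc_size_mib * 1024 * 1024 // block_size
assert blocks_per_base == 65536   # "num_blocks": 65536 per BaseBdev in the log

# raid0 stripes data across all members with no redundancy,
# so the array's block count is the sum of its base bdevs.
raid0_blocks = base_bdevs * blocks_per_base
print(raid0_blocks)   # 196608, matching "blockcnt 196608, blocklen 512"
```

This ignores any per-member metadata reservation; with `superblock: false`, as here, `data_offset` is 0 and the full base bdev is usable for data.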
00:10:04.771 21:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.771 21:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:04.771 21:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:04.771 21:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:04.771 21:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:04.771 21:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.771 21:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:04.771 21:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.771 21:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.771 21:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.771 21:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.771 21:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.771 21:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.771 21:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.771 21:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.771 21:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.771 21:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.771 21:37:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:04.771 21:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.771 21:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.771 "name": "Existed_Raid", 00:10:04.771 "uuid": "2c69828d-e426-4902-ab6d-8a6b4b9cb457", 00:10:04.771 "strip_size_kb": 64, 00:10:04.771 "state": "online", 00:10:04.771 "raid_level": "raid0", 00:10:04.771 "superblock": false, 00:10:04.771 "num_base_bdevs": 3, 00:10:04.771 "num_base_bdevs_discovered": 3, 00:10:04.771 "num_base_bdevs_operational": 3, 00:10:04.771 "base_bdevs_list": [ 00:10:04.771 { 00:10:04.771 "name": "BaseBdev1", 00:10:04.771 "uuid": "7dba18e6-ec2d-41fe-afc7-bedf3fe28f91", 00:10:04.771 "is_configured": true, 00:10:04.771 "data_offset": 0, 00:10:04.771 "data_size": 65536 00:10:04.771 }, 00:10:04.771 { 00:10:04.771 "name": "BaseBdev2", 00:10:04.771 "uuid": "674db6ef-2906-4ef0-a2d8-27af468b6de6", 00:10:04.771 "is_configured": true, 00:10:04.771 "data_offset": 0, 00:10:04.771 "data_size": 65536 00:10:04.771 }, 00:10:04.771 { 00:10:04.771 "name": "BaseBdev3", 00:10:04.771 "uuid": "360cde92-d98e-495f-979d-6b87ec2ccf3b", 00:10:04.771 "is_configured": true, 00:10:04.771 "data_offset": 0, 00:10:04.771 "data_size": 65536 00:10:04.771 } 00:10:04.771 ] 00:10:04.771 }' 00:10:04.771 21:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.771 21:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.338 21:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:05.338 21:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:05.338 21:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:05.338 21:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:10:05.338 21:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:05.338 21:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:05.338 21:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:05.338 21:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:05.338 21:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.338 21:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.338 [2024-12-10 21:37:05.862328] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:05.338 21:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.338 21:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:05.338 "name": "Existed_Raid", 00:10:05.338 "aliases": [ 00:10:05.338 "2c69828d-e426-4902-ab6d-8a6b4b9cb457" 00:10:05.338 ], 00:10:05.338 "product_name": "Raid Volume", 00:10:05.338 "block_size": 512, 00:10:05.338 "num_blocks": 196608, 00:10:05.338 "uuid": "2c69828d-e426-4902-ab6d-8a6b4b9cb457", 00:10:05.338 "assigned_rate_limits": { 00:10:05.338 "rw_ios_per_sec": 0, 00:10:05.338 "rw_mbytes_per_sec": 0, 00:10:05.338 "r_mbytes_per_sec": 0, 00:10:05.338 "w_mbytes_per_sec": 0 00:10:05.338 }, 00:10:05.338 "claimed": false, 00:10:05.338 "zoned": false, 00:10:05.338 "supported_io_types": { 00:10:05.338 "read": true, 00:10:05.339 "write": true, 00:10:05.339 "unmap": true, 00:10:05.339 "flush": true, 00:10:05.339 "reset": true, 00:10:05.339 "nvme_admin": false, 00:10:05.339 "nvme_io": false, 00:10:05.339 "nvme_io_md": false, 00:10:05.339 "write_zeroes": true, 00:10:05.339 "zcopy": false, 00:10:05.339 "get_zone_info": false, 00:10:05.339 "zone_management": false, 00:10:05.339 
"zone_append": false, 00:10:05.339 "compare": false, 00:10:05.339 "compare_and_write": false, 00:10:05.339 "abort": false, 00:10:05.339 "seek_hole": false, 00:10:05.339 "seek_data": false, 00:10:05.339 "copy": false, 00:10:05.339 "nvme_iov_md": false 00:10:05.339 }, 00:10:05.339 "memory_domains": [ 00:10:05.339 { 00:10:05.339 "dma_device_id": "system", 00:10:05.339 "dma_device_type": 1 00:10:05.339 }, 00:10:05.339 { 00:10:05.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.339 "dma_device_type": 2 00:10:05.339 }, 00:10:05.339 { 00:10:05.339 "dma_device_id": "system", 00:10:05.339 "dma_device_type": 1 00:10:05.339 }, 00:10:05.339 { 00:10:05.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.339 "dma_device_type": 2 00:10:05.339 }, 00:10:05.339 { 00:10:05.339 "dma_device_id": "system", 00:10:05.339 "dma_device_type": 1 00:10:05.339 }, 00:10:05.339 { 00:10:05.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.339 "dma_device_type": 2 00:10:05.339 } 00:10:05.339 ], 00:10:05.339 "driver_specific": { 00:10:05.339 "raid": { 00:10:05.339 "uuid": "2c69828d-e426-4902-ab6d-8a6b4b9cb457", 00:10:05.339 "strip_size_kb": 64, 00:10:05.339 "state": "online", 00:10:05.339 "raid_level": "raid0", 00:10:05.339 "superblock": false, 00:10:05.339 "num_base_bdevs": 3, 00:10:05.339 "num_base_bdevs_discovered": 3, 00:10:05.339 "num_base_bdevs_operational": 3, 00:10:05.339 "base_bdevs_list": [ 00:10:05.339 { 00:10:05.339 "name": "BaseBdev1", 00:10:05.339 "uuid": "7dba18e6-ec2d-41fe-afc7-bedf3fe28f91", 00:10:05.339 "is_configured": true, 00:10:05.339 "data_offset": 0, 00:10:05.339 "data_size": 65536 00:10:05.339 }, 00:10:05.339 { 00:10:05.339 "name": "BaseBdev2", 00:10:05.339 "uuid": "674db6ef-2906-4ef0-a2d8-27af468b6de6", 00:10:05.339 "is_configured": true, 00:10:05.339 "data_offset": 0, 00:10:05.339 "data_size": 65536 00:10:05.339 }, 00:10:05.339 { 00:10:05.339 "name": "BaseBdev3", 00:10:05.339 "uuid": "360cde92-d98e-495f-979d-6b87ec2ccf3b", 00:10:05.339 "is_configured": true, 
00:10:05.339 "data_offset": 0, 00:10:05.339 "data_size": 65536 00:10:05.339 } 00:10:05.339 ] 00:10:05.339 } 00:10:05.339 } 00:10:05.339 }' 00:10:05.339 21:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:05.339 21:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:05.339 BaseBdev2 00:10:05.339 BaseBdev3' 00:10:05.339 21:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.339 21:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:05.339 21:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.339 21:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:05.339 21:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.339 21:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.339 21:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.339 21:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.339 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.339 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.339 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.339 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.339 21:37:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:05.339 21:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.339 21:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.339 21:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.339 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.339 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.339 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.339 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:05.339 21:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.339 21:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.339 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.339 21:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.339 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.339 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.597 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:05.597 21:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.597 21:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.597 [2024-12-10 21:37:06.125657] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:05.597 [2024-12-10 21:37:06.125691] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:05.597 [2024-12-10 21:37:06.125756] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:05.597 21:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.597 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:05.597 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:05.597 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:05.597 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:05.597 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:05.597 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:10:05.597 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.597 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:05.597 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.597 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.597 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:05.597 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.597 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.597 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:05.597 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.597 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.597 21:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.597 21:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.597 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.597 21:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.597 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.597 "name": "Existed_Raid", 00:10:05.597 "uuid": "2c69828d-e426-4902-ab6d-8a6b4b9cb457", 00:10:05.597 "strip_size_kb": 64, 00:10:05.597 "state": "offline", 00:10:05.597 "raid_level": "raid0", 00:10:05.597 "superblock": false, 00:10:05.597 "num_base_bdevs": 3, 00:10:05.597 "num_base_bdevs_discovered": 2, 00:10:05.597 "num_base_bdevs_operational": 2, 00:10:05.597 "base_bdevs_list": [ 00:10:05.597 { 00:10:05.597 "name": null, 00:10:05.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.597 "is_configured": false, 00:10:05.597 "data_offset": 0, 00:10:05.597 "data_size": 65536 00:10:05.598 }, 00:10:05.598 { 00:10:05.598 "name": "BaseBdev2", 00:10:05.598 "uuid": "674db6ef-2906-4ef0-a2d8-27af468b6de6", 00:10:05.598 "is_configured": true, 00:10:05.598 "data_offset": 0, 00:10:05.598 "data_size": 65536 00:10:05.598 }, 00:10:05.598 { 00:10:05.598 "name": "BaseBdev3", 00:10:05.598 "uuid": "360cde92-d98e-495f-979d-6b87ec2ccf3b", 00:10:05.598 "is_configured": true, 00:10:05.598 "data_offset": 0, 00:10:05.598 "data_size": 65536 00:10:05.598 } 00:10:05.598 ] 00:10:05.598 }' 00:10:05.598 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.598 21:37:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.164 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:06.164 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:06.164 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.164 21:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.164 21:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.164 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:06.164 21:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.164 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:06.164 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:06.164 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:06.164 21:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.164 21:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.164 [2024-12-10 21:37:06.772637] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:06.164 21:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.164 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:06.164 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:06.164 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.164 21:37:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:06.164 21:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.164 21:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.164 21:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.164 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:06.164 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:06.164 21:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:06.164 21:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.164 21:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.164 [2024-12-10 21:37:06.926964] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:06.164 [2024-12-10 21:37:06.927020] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:06.422 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.422 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:06.422 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:06.422 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.422 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.422 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.422 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | 
select(.)' 00:10:06.422 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.422 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:06.422 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:06.422 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:06.422 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:06.422 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:06.422 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:06.422 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.422 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.422 BaseBdev2 00:10:06.422 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.422 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:06.422 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:06.422 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:06.422 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:06.422 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:06.422 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:06.422 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:06.422 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:06.422 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.422 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.422 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:06.422 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.422 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.422 [ 00:10:06.422 { 00:10:06.422 "name": "BaseBdev2", 00:10:06.422 "aliases": [ 00:10:06.422 "14c1e83c-4b42-4473-871f-59b6e0dbc80c" 00:10:06.422 ], 00:10:06.422 "product_name": "Malloc disk", 00:10:06.422 "block_size": 512, 00:10:06.422 "num_blocks": 65536, 00:10:06.422 "uuid": "14c1e83c-4b42-4473-871f-59b6e0dbc80c", 00:10:06.422 "assigned_rate_limits": { 00:10:06.422 "rw_ios_per_sec": 0, 00:10:06.422 "rw_mbytes_per_sec": 0, 00:10:06.422 "r_mbytes_per_sec": 0, 00:10:06.422 "w_mbytes_per_sec": 0 00:10:06.422 }, 00:10:06.422 "claimed": false, 00:10:06.422 "zoned": false, 00:10:06.422 "supported_io_types": { 00:10:06.422 "read": true, 00:10:06.422 "write": true, 00:10:06.422 "unmap": true, 00:10:06.422 "flush": true, 00:10:06.422 "reset": true, 00:10:06.422 "nvme_admin": false, 00:10:06.422 "nvme_io": false, 00:10:06.422 "nvme_io_md": false, 00:10:06.422 "write_zeroes": true, 00:10:06.422 "zcopy": true, 00:10:06.422 "get_zone_info": false, 00:10:06.422 "zone_management": false, 00:10:06.422 "zone_append": false, 00:10:06.422 "compare": false, 00:10:06.422 "compare_and_write": false, 00:10:06.422 "abort": true, 00:10:06.422 "seek_hole": false, 00:10:06.422 "seek_data": false, 00:10:06.422 "copy": true, 00:10:06.422 "nvme_iov_md": false 00:10:06.422 }, 00:10:06.423 "memory_domains": [ 00:10:06.423 { 00:10:06.423 "dma_device_id": "system", 00:10:06.423 "dma_device_type": 1 00:10:06.423 }, 
00:10:06.423 { 00:10:06.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.423 "dma_device_type": 2 00:10:06.423 } 00:10:06.423 ], 00:10:06.423 "driver_specific": {} 00:10:06.423 } 00:10:06.423 ] 00:10:06.423 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.423 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:06.423 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:06.423 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:06.423 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:06.423 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.423 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.681 BaseBdev3 00:10:06.681 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.681 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:06.681 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:06.681 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:06.681 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:06.681 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:06.681 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:06.681 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:06.681 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:06.681 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.681 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.681 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:06.681 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.681 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.681 [ 00:10:06.681 { 00:10:06.681 "name": "BaseBdev3", 00:10:06.681 "aliases": [ 00:10:06.681 "7afa9bd4-9a5d-4135-9783-fcf2316eba3e" 00:10:06.681 ], 00:10:06.681 "product_name": "Malloc disk", 00:10:06.681 "block_size": 512, 00:10:06.681 "num_blocks": 65536, 00:10:06.681 "uuid": "7afa9bd4-9a5d-4135-9783-fcf2316eba3e", 00:10:06.681 "assigned_rate_limits": { 00:10:06.681 "rw_ios_per_sec": 0, 00:10:06.681 "rw_mbytes_per_sec": 0, 00:10:06.681 "r_mbytes_per_sec": 0, 00:10:06.681 "w_mbytes_per_sec": 0 00:10:06.681 }, 00:10:06.681 "claimed": false, 00:10:06.681 "zoned": false, 00:10:06.681 "supported_io_types": { 00:10:06.681 "read": true, 00:10:06.681 "write": true, 00:10:06.681 "unmap": true, 00:10:06.681 "flush": true, 00:10:06.681 "reset": true, 00:10:06.681 "nvme_admin": false, 00:10:06.681 "nvme_io": false, 00:10:06.681 "nvme_io_md": false, 00:10:06.681 "write_zeroes": true, 00:10:06.681 "zcopy": true, 00:10:06.681 "get_zone_info": false, 00:10:06.681 "zone_management": false, 00:10:06.681 "zone_append": false, 00:10:06.681 "compare": false, 00:10:06.681 "compare_and_write": false, 00:10:06.681 "abort": true, 00:10:06.681 "seek_hole": false, 00:10:06.681 "seek_data": false, 00:10:06.681 "copy": true, 00:10:06.681 "nvme_iov_md": false 00:10:06.681 }, 00:10:06.681 "memory_domains": [ 00:10:06.681 { 00:10:06.681 "dma_device_id": "system", 00:10:06.681 "dma_device_type": 1 00:10:06.681 }, 00:10:06.681 { 
00:10:06.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.681 "dma_device_type": 2 00:10:06.681 } 00:10:06.681 ], 00:10:06.681 "driver_specific": {} 00:10:06.681 } 00:10:06.681 ] 00:10:06.681 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.681 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:06.681 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:06.681 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:06.681 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:06.681 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.681 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.681 [2024-12-10 21:37:07.250157] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:06.681 [2024-12-10 21:37:07.250294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:06.681 [2024-12-10 21:37:07.250355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:06.681 [2024-12-10 21:37:07.252560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:06.681 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.681 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:06.681 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.681 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:10:06.681 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.681 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.681 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.681 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.681 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.681 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.681 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.681 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.681 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.681 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.681 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.681 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.681 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.681 "name": "Existed_Raid", 00:10:06.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.681 "strip_size_kb": 64, 00:10:06.681 "state": "configuring", 00:10:06.681 "raid_level": "raid0", 00:10:06.681 "superblock": false, 00:10:06.681 "num_base_bdevs": 3, 00:10:06.681 "num_base_bdevs_discovered": 2, 00:10:06.681 "num_base_bdevs_operational": 3, 00:10:06.681 "base_bdevs_list": [ 00:10:06.681 { 00:10:06.681 "name": "BaseBdev1", 00:10:06.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.681 
"is_configured": false, 00:10:06.681 "data_offset": 0, 00:10:06.681 "data_size": 0 00:10:06.681 }, 00:10:06.681 { 00:10:06.682 "name": "BaseBdev2", 00:10:06.682 "uuid": "14c1e83c-4b42-4473-871f-59b6e0dbc80c", 00:10:06.682 "is_configured": true, 00:10:06.682 "data_offset": 0, 00:10:06.682 "data_size": 65536 00:10:06.682 }, 00:10:06.682 { 00:10:06.682 "name": "BaseBdev3", 00:10:06.682 "uuid": "7afa9bd4-9a5d-4135-9783-fcf2316eba3e", 00:10:06.682 "is_configured": true, 00:10:06.682 "data_offset": 0, 00:10:06.682 "data_size": 65536 00:10:06.682 } 00:10:06.682 ] 00:10:06.682 }' 00:10:06.682 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.682 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.940 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:06.940 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.940 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.940 [2024-12-10 21:37:07.709400] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:06.940 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.940 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:06.940 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.940 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.940 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.940 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.940 21:37:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.940 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.940 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.940 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.940 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.940 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.197 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.197 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.198 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.198 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.198 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.198 "name": "Existed_Raid", 00:10:07.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.198 "strip_size_kb": 64, 00:10:07.198 "state": "configuring", 00:10:07.198 "raid_level": "raid0", 00:10:07.198 "superblock": false, 00:10:07.198 "num_base_bdevs": 3, 00:10:07.198 "num_base_bdevs_discovered": 1, 00:10:07.198 "num_base_bdevs_operational": 3, 00:10:07.198 "base_bdevs_list": [ 00:10:07.198 { 00:10:07.198 "name": "BaseBdev1", 00:10:07.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.198 "is_configured": false, 00:10:07.198 "data_offset": 0, 00:10:07.198 "data_size": 0 00:10:07.198 }, 00:10:07.198 { 00:10:07.198 "name": null, 00:10:07.198 "uuid": "14c1e83c-4b42-4473-871f-59b6e0dbc80c", 00:10:07.198 "is_configured": false, 00:10:07.198 "data_offset": 0, 
00:10:07.198 "data_size": 65536 00:10:07.198 }, 00:10:07.198 { 00:10:07.198 "name": "BaseBdev3", 00:10:07.198 "uuid": "7afa9bd4-9a5d-4135-9783-fcf2316eba3e", 00:10:07.198 "is_configured": true, 00:10:07.198 "data_offset": 0, 00:10:07.198 "data_size": 65536 00:10:07.198 } 00:10:07.198 ] 00:10:07.198 }' 00:10:07.198 21:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.198 21:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.456 21:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.456 21:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.456 21:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.456 21:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:07.456 21:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.456 21:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:07.456 21:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:07.456 21:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.456 21:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.715 [2024-12-10 21:37:08.251058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:07.715 BaseBdev1 00:10:07.715 21:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.715 21:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:07.715 21:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:10:07.715 21:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:07.715 21:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:07.715 21:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:07.715 21:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:07.715 21:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:07.715 21:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.715 21:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.715 21:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.715 21:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:07.715 21:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.715 21:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.715 [ 00:10:07.715 { 00:10:07.715 "name": "BaseBdev1", 00:10:07.715 "aliases": [ 00:10:07.715 "0a99c5a6-210e-4bbf-8f8c-3d16cc57491d" 00:10:07.715 ], 00:10:07.715 "product_name": "Malloc disk", 00:10:07.715 "block_size": 512, 00:10:07.715 "num_blocks": 65536, 00:10:07.715 "uuid": "0a99c5a6-210e-4bbf-8f8c-3d16cc57491d", 00:10:07.715 "assigned_rate_limits": { 00:10:07.715 "rw_ios_per_sec": 0, 00:10:07.715 "rw_mbytes_per_sec": 0, 00:10:07.715 "r_mbytes_per_sec": 0, 00:10:07.715 "w_mbytes_per_sec": 0 00:10:07.715 }, 00:10:07.715 "claimed": true, 00:10:07.715 "claim_type": "exclusive_write", 00:10:07.715 "zoned": false, 00:10:07.715 "supported_io_types": { 00:10:07.715 "read": true, 00:10:07.715 "write": true, 00:10:07.715 "unmap": 
true, 00:10:07.715 "flush": true, 00:10:07.715 "reset": true, 00:10:07.715 "nvme_admin": false, 00:10:07.715 "nvme_io": false, 00:10:07.715 "nvme_io_md": false, 00:10:07.715 "write_zeroes": true, 00:10:07.715 "zcopy": true, 00:10:07.715 "get_zone_info": false, 00:10:07.715 "zone_management": false, 00:10:07.715 "zone_append": false, 00:10:07.715 "compare": false, 00:10:07.715 "compare_and_write": false, 00:10:07.715 "abort": true, 00:10:07.715 "seek_hole": false, 00:10:07.715 "seek_data": false, 00:10:07.715 "copy": true, 00:10:07.715 "nvme_iov_md": false 00:10:07.715 }, 00:10:07.715 "memory_domains": [ 00:10:07.715 { 00:10:07.715 "dma_device_id": "system", 00:10:07.715 "dma_device_type": 1 00:10:07.715 }, 00:10:07.715 { 00:10:07.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.715 "dma_device_type": 2 00:10:07.715 } 00:10:07.715 ], 00:10:07.715 "driver_specific": {} 00:10:07.715 } 00:10:07.715 ] 00:10:07.715 21:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.715 21:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:07.715 21:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:07.715 21:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.715 21:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.715 21:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.715 21:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.715 21:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.715 21:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.715 21:37:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.715 21:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.715 21:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.715 21:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.715 21:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.715 21:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.715 21:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.715 21:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.715 21:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.715 "name": "Existed_Raid", 00:10:07.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.715 "strip_size_kb": 64, 00:10:07.715 "state": "configuring", 00:10:07.715 "raid_level": "raid0", 00:10:07.715 "superblock": false, 00:10:07.715 "num_base_bdevs": 3, 00:10:07.715 "num_base_bdevs_discovered": 2, 00:10:07.715 "num_base_bdevs_operational": 3, 00:10:07.715 "base_bdevs_list": [ 00:10:07.715 { 00:10:07.715 "name": "BaseBdev1", 00:10:07.715 "uuid": "0a99c5a6-210e-4bbf-8f8c-3d16cc57491d", 00:10:07.715 "is_configured": true, 00:10:07.715 "data_offset": 0, 00:10:07.715 "data_size": 65536 00:10:07.715 }, 00:10:07.715 { 00:10:07.715 "name": null, 00:10:07.715 "uuid": "14c1e83c-4b42-4473-871f-59b6e0dbc80c", 00:10:07.715 "is_configured": false, 00:10:07.715 "data_offset": 0, 00:10:07.715 "data_size": 65536 00:10:07.715 }, 00:10:07.715 { 00:10:07.715 "name": "BaseBdev3", 00:10:07.715 "uuid": "7afa9bd4-9a5d-4135-9783-fcf2316eba3e", 00:10:07.715 "is_configured": true, 00:10:07.715 "data_offset": 0, 
00:10:07.715 "data_size": 65536 00:10:07.715 } 00:10:07.715 ] 00:10:07.715 }' 00:10:07.715 21:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.715 21:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.285 21:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.285 21:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:08.285 21:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.285 21:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.285 21:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.285 21:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:08.285 21:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:08.285 21:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.285 21:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.285 [2024-12-10 21:37:08.818187] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:08.285 21:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.285 21:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:08.285 21:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.285 21:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.285 21:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:10:08.285 21:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.285 21:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.285 21:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.285 21:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.285 21:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.285 21:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.285 21:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.285 21:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.285 21:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.285 21:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.285 21:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.285 21:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.285 "name": "Existed_Raid", 00:10:08.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.285 "strip_size_kb": 64, 00:10:08.285 "state": "configuring", 00:10:08.285 "raid_level": "raid0", 00:10:08.285 "superblock": false, 00:10:08.285 "num_base_bdevs": 3, 00:10:08.285 "num_base_bdevs_discovered": 1, 00:10:08.285 "num_base_bdevs_operational": 3, 00:10:08.285 "base_bdevs_list": [ 00:10:08.285 { 00:10:08.285 "name": "BaseBdev1", 00:10:08.285 "uuid": "0a99c5a6-210e-4bbf-8f8c-3d16cc57491d", 00:10:08.285 "is_configured": true, 00:10:08.285 "data_offset": 0, 00:10:08.285 "data_size": 65536 00:10:08.285 }, 00:10:08.285 { 
00:10:08.285 "name": null, 00:10:08.285 "uuid": "14c1e83c-4b42-4473-871f-59b6e0dbc80c", 00:10:08.285 "is_configured": false, 00:10:08.285 "data_offset": 0, 00:10:08.285 "data_size": 65536 00:10:08.285 }, 00:10:08.285 { 00:10:08.285 "name": null, 00:10:08.285 "uuid": "7afa9bd4-9a5d-4135-9783-fcf2316eba3e", 00:10:08.285 "is_configured": false, 00:10:08.285 "data_offset": 0, 00:10:08.285 "data_size": 65536 00:10:08.285 } 00:10:08.285 ] 00:10:08.285 }' 00:10:08.285 21:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.285 21:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.545 21:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.545 21:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:08.545 21:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.545 21:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.804 21:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.804 21:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:08.805 21:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:08.805 21:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.805 21:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.805 [2024-12-10 21:37:09.361313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:08.805 21:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.805 21:37:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:08.805 21:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.805 21:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.805 21:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:08.805 21:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.805 21:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.805 21:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.805 21:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.805 21:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.805 21:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.805 21:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.805 21:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.805 21:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.805 21:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.805 21:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.805 21:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.805 "name": "Existed_Raid", 00:10:08.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.805 "strip_size_kb": 64, 00:10:08.805 "state": "configuring", 00:10:08.805 "raid_level": "raid0", 00:10:08.805 
"superblock": false, 00:10:08.805 "num_base_bdevs": 3, 00:10:08.805 "num_base_bdevs_discovered": 2, 00:10:08.805 "num_base_bdevs_operational": 3, 00:10:08.805 "base_bdevs_list": [ 00:10:08.805 { 00:10:08.805 "name": "BaseBdev1", 00:10:08.805 "uuid": "0a99c5a6-210e-4bbf-8f8c-3d16cc57491d", 00:10:08.805 "is_configured": true, 00:10:08.805 "data_offset": 0, 00:10:08.805 "data_size": 65536 00:10:08.805 }, 00:10:08.805 { 00:10:08.805 "name": null, 00:10:08.805 "uuid": "14c1e83c-4b42-4473-871f-59b6e0dbc80c", 00:10:08.805 "is_configured": false, 00:10:08.805 "data_offset": 0, 00:10:08.805 "data_size": 65536 00:10:08.805 }, 00:10:08.805 { 00:10:08.805 "name": "BaseBdev3", 00:10:08.805 "uuid": "7afa9bd4-9a5d-4135-9783-fcf2316eba3e", 00:10:08.805 "is_configured": true, 00:10:08.805 "data_offset": 0, 00:10:08.805 "data_size": 65536 00:10:08.805 } 00:10:08.805 ] 00:10:08.805 }' 00:10:08.805 21:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.805 21:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.065 21:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:09.065 21:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.065 21:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.065 21:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.065 21:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.326 21:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:09.326 21:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:09.326 21:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:09.326 21:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.326 [2024-12-10 21:37:09.852525] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:09.326 21:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.326 21:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:09.326 21:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.326 21:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.326 21:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:09.326 21:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.326 21:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.326 21:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.326 21:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.326 21:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.326 21:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.326 21:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.327 21:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.327 21:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.327 21:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.327 21:37:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.327 21:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.327 "name": "Existed_Raid", 00:10:09.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.327 "strip_size_kb": 64, 00:10:09.327 "state": "configuring", 00:10:09.327 "raid_level": "raid0", 00:10:09.327 "superblock": false, 00:10:09.327 "num_base_bdevs": 3, 00:10:09.327 "num_base_bdevs_discovered": 1, 00:10:09.327 "num_base_bdevs_operational": 3, 00:10:09.327 "base_bdevs_list": [ 00:10:09.327 { 00:10:09.327 "name": null, 00:10:09.327 "uuid": "0a99c5a6-210e-4bbf-8f8c-3d16cc57491d", 00:10:09.327 "is_configured": false, 00:10:09.327 "data_offset": 0, 00:10:09.327 "data_size": 65536 00:10:09.327 }, 00:10:09.327 { 00:10:09.327 "name": null, 00:10:09.327 "uuid": "14c1e83c-4b42-4473-871f-59b6e0dbc80c", 00:10:09.327 "is_configured": false, 00:10:09.327 "data_offset": 0, 00:10:09.327 "data_size": 65536 00:10:09.327 }, 00:10:09.327 { 00:10:09.327 "name": "BaseBdev3", 00:10:09.327 "uuid": "7afa9bd4-9a5d-4135-9783-fcf2316eba3e", 00:10:09.327 "is_configured": true, 00:10:09.327 "data_offset": 0, 00:10:09.327 "data_size": 65536 00:10:09.327 } 00:10:09.327 ] 00:10:09.327 }' 00:10:09.327 21:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.327 21:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.896 21:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.896 21:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.896 21:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.896 21:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:09.896 21:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:10:09.896 21:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:09.896 21:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:09.896 21:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.896 21:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.896 [2024-12-10 21:37:10.457829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:09.896 21:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.896 21:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:09.896 21:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.896 21:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.896 21:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:09.896 21:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.896 21:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.896 21:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.896 21:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.896 21:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.896 21:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.896 21:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:10:09.896 21:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.896 21:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.896 21:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.896 21:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.896 21:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.896 "name": "Existed_Raid", 00:10:09.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.896 "strip_size_kb": 64, 00:10:09.896 "state": "configuring", 00:10:09.896 "raid_level": "raid0", 00:10:09.896 "superblock": false, 00:10:09.896 "num_base_bdevs": 3, 00:10:09.896 "num_base_bdevs_discovered": 2, 00:10:09.896 "num_base_bdevs_operational": 3, 00:10:09.896 "base_bdevs_list": [ 00:10:09.896 { 00:10:09.896 "name": null, 00:10:09.896 "uuid": "0a99c5a6-210e-4bbf-8f8c-3d16cc57491d", 00:10:09.896 "is_configured": false, 00:10:09.896 "data_offset": 0, 00:10:09.896 "data_size": 65536 00:10:09.896 }, 00:10:09.896 { 00:10:09.896 "name": "BaseBdev2", 00:10:09.896 "uuid": "14c1e83c-4b42-4473-871f-59b6e0dbc80c", 00:10:09.896 "is_configured": true, 00:10:09.896 "data_offset": 0, 00:10:09.896 "data_size": 65536 00:10:09.896 }, 00:10:09.896 { 00:10:09.896 "name": "BaseBdev3", 00:10:09.896 "uuid": "7afa9bd4-9a5d-4135-9783-fcf2316eba3e", 00:10:09.896 "is_configured": true, 00:10:09.896 "data_offset": 0, 00:10:09.896 "data_size": 65536 00:10:09.896 } 00:10:09.896 ] 00:10:09.896 }' 00:10:09.896 21:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.896 21:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.156 21:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.156 21:37:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:10.156 21:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.156 21:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.415 21:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.415 21:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:10.415 21:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.415 21:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:10.415 21:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.415 21:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.415 21:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.415 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0a99c5a6-210e-4bbf-8f8c-3d16cc57491d 00:10:10.415 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.415 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.415 [2024-12-10 21:37:11.066883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:10.415 [2024-12-10 21:37:11.066931] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:10.415 [2024-12-10 21:37:11.066940] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:10.415 [2024-12-10 21:37:11.067185] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:10:10.415 [2024-12-10 21:37:11.067329] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:10.415 [2024-12-10 21:37:11.067340] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:10.415 [2024-12-10 21:37:11.067752] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.415 NewBaseBdev 00:10:10.415 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.415 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:10.415 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:10.415 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:10.415 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:10.415 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:10.415 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:10.415 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:10.415 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.415 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.415 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.415 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:10.415 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.415 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:10.415 [ 00:10:10.416 { 00:10:10.416 "name": "NewBaseBdev", 00:10:10.416 "aliases": [ 00:10:10.416 "0a99c5a6-210e-4bbf-8f8c-3d16cc57491d" 00:10:10.416 ], 00:10:10.416 "product_name": "Malloc disk", 00:10:10.416 "block_size": 512, 00:10:10.416 "num_blocks": 65536, 00:10:10.416 "uuid": "0a99c5a6-210e-4bbf-8f8c-3d16cc57491d", 00:10:10.416 "assigned_rate_limits": { 00:10:10.416 "rw_ios_per_sec": 0, 00:10:10.416 "rw_mbytes_per_sec": 0, 00:10:10.416 "r_mbytes_per_sec": 0, 00:10:10.416 "w_mbytes_per_sec": 0 00:10:10.416 }, 00:10:10.416 "claimed": true, 00:10:10.416 "claim_type": "exclusive_write", 00:10:10.416 "zoned": false, 00:10:10.416 "supported_io_types": { 00:10:10.416 "read": true, 00:10:10.416 "write": true, 00:10:10.416 "unmap": true, 00:10:10.416 "flush": true, 00:10:10.416 "reset": true, 00:10:10.416 "nvme_admin": false, 00:10:10.416 "nvme_io": false, 00:10:10.416 "nvme_io_md": false, 00:10:10.416 "write_zeroes": true, 00:10:10.416 "zcopy": true, 00:10:10.416 "get_zone_info": false, 00:10:10.416 "zone_management": false, 00:10:10.416 "zone_append": false, 00:10:10.416 "compare": false, 00:10:10.416 "compare_and_write": false, 00:10:10.416 "abort": true, 00:10:10.416 "seek_hole": false, 00:10:10.416 "seek_data": false, 00:10:10.416 "copy": true, 00:10:10.416 "nvme_iov_md": false 00:10:10.416 }, 00:10:10.416 "memory_domains": [ 00:10:10.416 { 00:10:10.416 "dma_device_id": "system", 00:10:10.416 "dma_device_type": 1 00:10:10.416 }, 00:10:10.416 { 00:10:10.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.416 "dma_device_type": 2 00:10:10.416 } 00:10:10.416 ], 00:10:10.416 "driver_specific": {} 00:10:10.416 } 00:10:10.416 ] 00:10:10.416 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.416 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:10.416 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:10:10.416 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.416 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.416 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.416 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.416 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.416 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.416 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.416 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.416 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.416 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.416 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.416 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.416 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.416 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.416 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.416 "name": "Existed_Raid", 00:10:10.416 "uuid": "b3b69e98-9091-40da-9966-2cb0998ffcdf", 00:10:10.416 "strip_size_kb": 64, 00:10:10.416 "state": "online", 00:10:10.416 "raid_level": "raid0", 00:10:10.416 "superblock": false, 00:10:10.416 "num_base_bdevs": 3, 00:10:10.416 
"num_base_bdevs_discovered": 3, 00:10:10.416 "num_base_bdevs_operational": 3, 00:10:10.416 "base_bdevs_list": [ 00:10:10.416 { 00:10:10.416 "name": "NewBaseBdev", 00:10:10.416 "uuid": "0a99c5a6-210e-4bbf-8f8c-3d16cc57491d", 00:10:10.416 "is_configured": true, 00:10:10.416 "data_offset": 0, 00:10:10.416 "data_size": 65536 00:10:10.416 }, 00:10:10.416 { 00:10:10.416 "name": "BaseBdev2", 00:10:10.416 "uuid": "14c1e83c-4b42-4473-871f-59b6e0dbc80c", 00:10:10.416 "is_configured": true, 00:10:10.416 "data_offset": 0, 00:10:10.416 "data_size": 65536 00:10:10.416 }, 00:10:10.416 { 00:10:10.416 "name": "BaseBdev3", 00:10:10.416 "uuid": "7afa9bd4-9a5d-4135-9783-fcf2316eba3e", 00:10:10.416 "is_configured": true, 00:10:10.416 "data_offset": 0, 00:10:10.416 "data_size": 65536 00:10:10.416 } 00:10:10.416 ] 00:10:10.416 }' 00:10:10.416 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.416 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.983 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:10.983 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:10.983 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:10.983 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:10.983 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:10.983 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:10.983 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:10.983 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.983 21:37:11 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:10.983 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:10.983 [2024-12-10 21:37:11.586486] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:10.983 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.983 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:10.983 "name": "Existed_Raid", 00:10:10.983 "aliases": [ 00:10:10.983 "b3b69e98-9091-40da-9966-2cb0998ffcdf" 00:10:10.983 ], 00:10:10.983 "product_name": "Raid Volume", 00:10:10.983 "block_size": 512, 00:10:10.983 "num_blocks": 196608, 00:10:10.983 "uuid": "b3b69e98-9091-40da-9966-2cb0998ffcdf", 00:10:10.983 "assigned_rate_limits": { 00:10:10.983 "rw_ios_per_sec": 0, 00:10:10.983 "rw_mbytes_per_sec": 0, 00:10:10.983 "r_mbytes_per_sec": 0, 00:10:10.983 "w_mbytes_per_sec": 0 00:10:10.983 }, 00:10:10.983 "claimed": false, 00:10:10.983 "zoned": false, 00:10:10.983 "supported_io_types": { 00:10:10.983 "read": true, 00:10:10.983 "write": true, 00:10:10.983 "unmap": true, 00:10:10.983 "flush": true, 00:10:10.983 "reset": true, 00:10:10.983 "nvme_admin": false, 00:10:10.983 "nvme_io": false, 00:10:10.983 "nvme_io_md": false, 00:10:10.983 "write_zeroes": true, 00:10:10.983 "zcopy": false, 00:10:10.983 "get_zone_info": false, 00:10:10.983 "zone_management": false, 00:10:10.983 "zone_append": false, 00:10:10.983 "compare": false, 00:10:10.983 "compare_and_write": false, 00:10:10.983 "abort": false, 00:10:10.983 "seek_hole": false, 00:10:10.983 "seek_data": false, 00:10:10.983 "copy": false, 00:10:10.983 "nvme_iov_md": false 00:10:10.983 }, 00:10:10.983 "memory_domains": [ 00:10:10.983 { 00:10:10.983 "dma_device_id": "system", 00:10:10.983 "dma_device_type": 1 00:10:10.983 }, 00:10:10.983 { 00:10:10.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.983 "dma_device_type": 2 00:10:10.983 }, 00:10:10.983 
{ 00:10:10.983 "dma_device_id": "system", 00:10:10.983 "dma_device_type": 1 00:10:10.983 }, 00:10:10.983 { 00:10:10.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.984 "dma_device_type": 2 00:10:10.984 }, 00:10:10.984 { 00:10:10.984 "dma_device_id": "system", 00:10:10.984 "dma_device_type": 1 00:10:10.984 }, 00:10:10.984 { 00:10:10.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.984 "dma_device_type": 2 00:10:10.984 } 00:10:10.984 ], 00:10:10.984 "driver_specific": { 00:10:10.984 "raid": { 00:10:10.984 "uuid": "b3b69e98-9091-40da-9966-2cb0998ffcdf", 00:10:10.984 "strip_size_kb": 64, 00:10:10.984 "state": "online", 00:10:10.984 "raid_level": "raid0", 00:10:10.984 "superblock": false, 00:10:10.984 "num_base_bdevs": 3, 00:10:10.984 "num_base_bdevs_discovered": 3, 00:10:10.984 "num_base_bdevs_operational": 3, 00:10:10.984 "base_bdevs_list": [ 00:10:10.984 { 00:10:10.984 "name": "NewBaseBdev", 00:10:10.984 "uuid": "0a99c5a6-210e-4bbf-8f8c-3d16cc57491d", 00:10:10.984 "is_configured": true, 00:10:10.984 "data_offset": 0, 00:10:10.984 "data_size": 65536 00:10:10.984 }, 00:10:10.984 { 00:10:10.984 "name": "BaseBdev2", 00:10:10.984 "uuid": "14c1e83c-4b42-4473-871f-59b6e0dbc80c", 00:10:10.984 "is_configured": true, 00:10:10.984 "data_offset": 0, 00:10:10.984 "data_size": 65536 00:10:10.984 }, 00:10:10.984 { 00:10:10.984 "name": "BaseBdev3", 00:10:10.984 "uuid": "7afa9bd4-9a5d-4135-9783-fcf2316eba3e", 00:10:10.984 "is_configured": true, 00:10:10.984 "data_offset": 0, 00:10:10.984 "data_size": 65536 00:10:10.984 } 00:10:10.984 ] 00:10:10.984 } 00:10:10.984 } 00:10:10.984 }' 00:10:10.984 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:10.984 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:10.984 BaseBdev2 00:10:10.984 BaseBdev3' 00:10:10.984 21:37:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.984 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:10.984 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.984 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:10.984 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.984 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.984 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.984 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.243 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.243 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.243 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.243 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:11.243 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.243 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.243 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.243 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.243 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.243 
21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.243 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.243 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:11.243 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.243 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.243 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.243 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.243 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.243 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.243 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:11.243 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.243 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.243 [2024-12-10 21:37:11.885628] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:11.243 [2024-12-10 21:37:11.885656] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:11.243 [2024-12-10 21:37:11.885744] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:11.243 [2024-12-10 21:37:11.885800] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:11.243 [2024-12-10 21:37:11.885812] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:11.243 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.243 21:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63919 00:10:11.243 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63919 ']' 00:10:11.243 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63919 00:10:11.243 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:11.243 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:11.243 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63919 00:10:11.243 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:11.243 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:11.243 killing process with pid 63919 00:10:11.243 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63919' 00:10:11.243 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63919 00:10:11.243 [2024-12-10 21:37:11.937074] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:11.243 21:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63919 00:10:11.502 [2024-12-10 21:37:12.258011] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:12.882 ************************************ 00:10:12.882 END TEST raid_state_function_test 00:10:12.882 ************************************ 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:12.882 00:10:12.882 real 0m11.082s 00:10:12.882 user 0m17.553s 
00:10:12.882 sys 0m1.948s 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.882 21:37:13 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:10:12.882 21:37:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:12.882 21:37:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.882 21:37:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:12.882 ************************************ 00:10:12.882 START TEST raid_state_function_test_sb 00:10:12.882 ************************************ 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo 
BaseBdev2 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64540 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64540' 00:10:12.882 Process raid pid: 64540 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64540 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64540 ']' 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:12.882 21:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.882 [2024-12-10 21:37:13.635225] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:10:12.882 [2024-12-10 21:37:13.635467] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.142 [2024-12-10 21:37:13.810701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.401 [2024-12-10 21:37:13.945228] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.401 [2024-12-10 21:37:14.176391] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:13.401 [2024-12-10 21:37:14.176453] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:13.970 21:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:13.970 21:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:13.970 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:13.970 21:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.970 21:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.970 [2024-12-10 21:37:14.568827] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:13.970 [2024-12-10 21:37:14.568962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:13.970 [2024-12-10 21:37:14.569000] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:13.970 [2024-12-10 21:37:14.569029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:13.970 [2024-12-10 21:37:14.569053] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:10:13.970 [2024-12-10 21:37:14.569078] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:13.970 21:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.970 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:13.970 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.970 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.970 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:13.970 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.970 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.970 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.970 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.970 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.970 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.970 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.970 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.970 21:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.970 21:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.970 21:37:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.970 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.970 "name": "Existed_Raid", 00:10:13.970 "uuid": "0b83c986-4b96-415b-8db7-64f91b683310", 00:10:13.970 "strip_size_kb": 64, 00:10:13.970 "state": "configuring", 00:10:13.970 "raid_level": "raid0", 00:10:13.970 "superblock": true, 00:10:13.970 "num_base_bdevs": 3, 00:10:13.970 "num_base_bdevs_discovered": 0, 00:10:13.970 "num_base_bdevs_operational": 3, 00:10:13.970 "base_bdevs_list": [ 00:10:13.970 { 00:10:13.970 "name": "BaseBdev1", 00:10:13.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.970 "is_configured": false, 00:10:13.970 "data_offset": 0, 00:10:13.970 "data_size": 0 00:10:13.970 }, 00:10:13.970 { 00:10:13.970 "name": "BaseBdev2", 00:10:13.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.970 "is_configured": false, 00:10:13.970 "data_offset": 0, 00:10:13.970 "data_size": 0 00:10:13.970 }, 00:10:13.970 { 00:10:13.970 "name": "BaseBdev3", 00:10:13.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.970 "is_configured": false, 00:10:13.970 "data_offset": 0, 00:10:13.970 "data_size": 0 00:10:13.970 } 00:10:13.970 ] 00:10:13.970 }' 00:10:13.970 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.970 21:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.542 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:14.542 21:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.542 21:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.542 [2024-12-10 21:37:15.043995] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:14.542 [2024-12-10 21:37:15.044036] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:14.542 21:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.542 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:14.542 21:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.542 21:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.542 [2024-12-10 21:37:15.056022] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:14.542 [2024-12-10 21:37:15.056079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:14.542 [2024-12-10 21:37:15.056089] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:14.542 [2024-12-10 21:37:15.056100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:14.542 [2024-12-10 21:37:15.056108] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:14.542 [2024-12-10 21:37:15.056119] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:14.542 21:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.542 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:14.543 21:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.543 21:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.543 [2024-12-10 21:37:15.106562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:14.543 BaseBdev1 
00:10:14.543 21:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.543 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:14.543 21:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:14.543 21:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:14.543 21:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:14.543 21:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:14.543 21:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:14.543 21:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:14.543 21:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.543 21:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.543 21:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.543 21:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:14.543 21:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.543 21:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.543 [ 00:10:14.543 { 00:10:14.543 "name": "BaseBdev1", 00:10:14.543 "aliases": [ 00:10:14.543 "51d0763a-2504-40f5-8917-063eb75cb9a4" 00:10:14.543 ], 00:10:14.543 "product_name": "Malloc disk", 00:10:14.543 "block_size": 512, 00:10:14.543 "num_blocks": 65536, 00:10:14.543 "uuid": "51d0763a-2504-40f5-8917-063eb75cb9a4", 00:10:14.543 "assigned_rate_limits": { 00:10:14.543 
"rw_ios_per_sec": 0, 00:10:14.543 "rw_mbytes_per_sec": 0, 00:10:14.543 "r_mbytes_per_sec": 0, 00:10:14.543 "w_mbytes_per_sec": 0 00:10:14.543 }, 00:10:14.543 "claimed": true, 00:10:14.543 "claim_type": "exclusive_write", 00:10:14.543 "zoned": false, 00:10:14.543 "supported_io_types": { 00:10:14.543 "read": true, 00:10:14.543 "write": true, 00:10:14.543 "unmap": true, 00:10:14.543 "flush": true, 00:10:14.543 "reset": true, 00:10:14.543 "nvme_admin": false, 00:10:14.543 "nvme_io": false, 00:10:14.543 "nvme_io_md": false, 00:10:14.543 "write_zeroes": true, 00:10:14.543 "zcopy": true, 00:10:14.543 "get_zone_info": false, 00:10:14.543 "zone_management": false, 00:10:14.543 "zone_append": false, 00:10:14.543 "compare": false, 00:10:14.543 "compare_and_write": false, 00:10:14.543 "abort": true, 00:10:14.543 "seek_hole": false, 00:10:14.543 "seek_data": false, 00:10:14.543 "copy": true, 00:10:14.543 "nvme_iov_md": false 00:10:14.543 }, 00:10:14.543 "memory_domains": [ 00:10:14.543 { 00:10:14.543 "dma_device_id": "system", 00:10:14.543 "dma_device_type": 1 00:10:14.543 }, 00:10:14.543 { 00:10:14.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.543 "dma_device_type": 2 00:10:14.543 } 00:10:14.543 ], 00:10:14.543 "driver_specific": {} 00:10:14.543 } 00:10:14.543 ] 00:10:14.543 21:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.543 21:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:14.543 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:14.543 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.543 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.543 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:10:14.543 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.543 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.544 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.544 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.544 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.544 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.544 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.544 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.544 21:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.544 21:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.544 21:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.544 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.544 "name": "Existed_Raid", 00:10:14.544 "uuid": "d68518ca-fa6e-4329-a296-77d93fd701fd", 00:10:14.544 "strip_size_kb": 64, 00:10:14.544 "state": "configuring", 00:10:14.544 "raid_level": "raid0", 00:10:14.544 "superblock": true, 00:10:14.544 "num_base_bdevs": 3, 00:10:14.544 "num_base_bdevs_discovered": 1, 00:10:14.544 "num_base_bdevs_operational": 3, 00:10:14.544 "base_bdevs_list": [ 00:10:14.544 { 00:10:14.544 "name": "BaseBdev1", 00:10:14.544 "uuid": "51d0763a-2504-40f5-8917-063eb75cb9a4", 00:10:14.544 "is_configured": true, 00:10:14.544 "data_offset": 2048, 00:10:14.544 "data_size": 63488 
00:10:14.544 }, 00:10:14.544 { 00:10:14.544 "name": "BaseBdev2", 00:10:14.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.544 "is_configured": false, 00:10:14.544 "data_offset": 0, 00:10:14.544 "data_size": 0 00:10:14.544 }, 00:10:14.544 { 00:10:14.544 "name": "BaseBdev3", 00:10:14.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.544 "is_configured": false, 00:10:14.544 "data_offset": 0, 00:10:14.544 "data_size": 0 00:10:14.544 } 00:10:14.544 ] 00:10:14.544 }' 00:10:14.544 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.544 21:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.805 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:14.805 21:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.805 21:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.805 [2024-12-10 21:37:15.581791] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:14.805 [2024-12-10 21:37:15.581917] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:14.805 21:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.065 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:15.065 21:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.065 21:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.065 [2024-12-10 21:37:15.589828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:15.065 [2024-12-10 
21:37:15.591709] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:15.065 [2024-12-10 21:37:15.591817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:15.065 [2024-12-10 21:37:15.591853] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:15.065 [2024-12-10 21:37:15.591880] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:15.065 21:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.066 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:15.066 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:15.066 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:15.066 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.066 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.066 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.066 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.066 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.066 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.066 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.066 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.066 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:10:15.066 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.066 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.066 21:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.066 21:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.066 21:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.066 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.066 "name": "Existed_Raid", 00:10:15.066 "uuid": "babc426b-1831-4066-b83f-74bb138b1156", 00:10:15.066 "strip_size_kb": 64, 00:10:15.066 "state": "configuring", 00:10:15.066 "raid_level": "raid0", 00:10:15.066 "superblock": true, 00:10:15.066 "num_base_bdevs": 3, 00:10:15.066 "num_base_bdevs_discovered": 1, 00:10:15.066 "num_base_bdevs_operational": 3, 00:10:15.066 "base_bdevs_list": [ 00:10:15.066 { 00:10:15.066 "name": "BaseBdev1", 00:10:15.066 "uuid": "51d0763a-2504-40f5-8917-063eb75cb9a4", 00:10:15.066 "is_configured": true, 00:10:15.066 "data_offset": 2048, 00:10:15.066 "data_size": 63488 00:10:15.066 }, 00:10:15.066 { 00:10:15.066 "name": "BaseBdev2", 00:10:15.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.066 "is_configured": false, 00:10:15.066 "data_offset": 0, 00:10:15.066 "data_size": 0 00:10:15.066 }, 00:10:15.066 { 00:10:15.066 "name": "BaseBdev3", 00:10:15.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.066 "is_configured": false, 00:10:15.066 "data_offset": 0, 00:10:15.066 "data_size": 0 00:10:15.066 } 00:10:15.066 ] 00:10:15.066 }' 00:10:15.066 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.066 21:37:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:15.326 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:15.326 21:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.326 21:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.326 [2024-12-10 21:37:16.052642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:15.326 BaseBdev2 00:10:15.326 21:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.326 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:15.326 21:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:15.326 21:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:15.326 21:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:15.326 21:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:15.326 21:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:15.326 21:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:15.326 21:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.326 21:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.326 21:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.326 21:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:15.326 21:37:16 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.327 21:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.327 [ 00:10:15.327 { 00:10:15.327 "name": "BaseBdev2", 00:10:15.327 "aliases": [ 00:10:15.327 "4f596387-b626-4220-8595-a81aa4c69e4d" 00:10:15.327 ], 00:10:15.327 "product_name": "Malloc disk", 00:10:15.327 "block_size": 512, 00:10:15.327 "num_blocks": 65536, 00:10:15.327 "uuid": "4f596387-b626-4220-8595-a81aa4c69e4d", 00:10:15.327 "assigned_rate_limits": { 00:10:15.327 "rw_ios_per_sec": 0, 00:10:15.327 "rw_mbytes_per_sec": 0, 00:10:15.327 "r_mbytes_per_sec": 0, 00:10:15.327 "w_mbytes_per_sec": 0 00:10:15.327 }, 00:10:15.327 "claimed": true, 00:10:15.327 "claim_type": "exclusive_write", 00:10:15.327 "zoned": false, 00:10:15.327 "supported_io_types": { 00:10:15.327 "read": true, 00:10:15.327 "write": true, 00:10:15.327 "unmap": true, 00:10:15.327 "flush": true, 00:10:15.327 "reset": true, 00:10:15.327 "nvme_admin": false, 00:10:15.327 "nvme_io": false, 00:10:15.327 "nvme_io_md": false, 00:10:15.327 "write_zeroes": true, 00:10:15.327 "zcopy": true, 00:10:15.327 "get_zone_info": false, 00:10:15.327 "zone_management": false, 00:10:15.327 "zone_append": false, 00:10:15.327 "compare": false, 00:10:15.327 "compare_and_write": false, 00:10:15.327 "abort": true, 00:10:15.327 "seek_hole": false, 00:10:15.327 "seek_data": false, 00:10:15.327 "copy": true, 00:10:15.327 "nvme_iov_md": false 00:10:15.327 }, 00:10:15.327 "memory_domains": [ 00:10:15.327 { 00:10:15.327 "dma_device_id": "system", 00:10:15.327 "dma_device_type": 1 00:10:15.327 }, 00:10:15.327 { 00:10:15.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.327 "dma_device_type": 2 00:10:15.327 } 00:10:15.327 ], 00:10:15.327 "driver_specific": {} 00:10:15.327 } 00:10:15.327 ] 00:10:15.327 21:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.327 21:37:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:10:15.327 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:15.327 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:15.327 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:15.327 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.327 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.327 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.327 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.327 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.327 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.327 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.327 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.327 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.327 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.327 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.327 21:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.327 21:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.587 21:37:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.587 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.587 "name": "Existed_Raid", 00:10:15.587 "uuid": "babc426b-1831-4066-b83f-74bb138b1156", 00:10:15.587 "strip_size_kb": 64, 00:10:15.587 "state": "configuring", 00:10:15.587 "raid_level": "raid0", 00:10:15.587 "superblock": true, 00:10:15.587 "num_base_bdevs": 3, 00:10:15.587 "num_base_bdevs_discovered": 2, 00:10:15.587 "num_base_bdevs_operational": 3, 00:10:15.587 "base_bdevs_list": [ 00:10:15.587 { 00:10:15.587 "name": "BaseBdev1", 00:10:15.587 "uuid": "51d0763a-2504-40f5-8917-063eb75cb9a4", 00:10:15.587 "is_configured": true, 00:10:15.587 "data_offset": 2048, 00:10:15.587 "data_size": 63488 00:10:15.587 }, 00:10:15.587 { 00:10:15.587 "name": "BaseBdev2", 00:10:15.587 "uuid": "4f596387-b626-4220-8595-a81aa4c69e4d", 00:10:15.587 "is_configured": true, 00:10:15.587 "data_offset": 2048, 00:10:15.587 "data_size": 63488 00:10:15.587 }, 00:10:15.587 { 00:10:15.587 "name": "BaseBdev3", 00:10:15.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.587 "is_configured": false, 00:10:15.587 "data_offset": 0, 00:10:15.587 "data_size": 0 00:10:15.587 } 00:10:15.587 ] 00:10:15.587 }' 00:10:15.587 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.587 21:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.847 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:15.847 21:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.847 21:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.847 [2024-12-10 21:37:16.594476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:15.847 [2024-12-10 21:37:16.594868] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:15.847 [2024-12-10 21:37:16.594935] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:15.847 [2024-12-10 21:37:16.595288] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:15.847 BaseBdev3 00:10:15.847 [2024-12-10 21:37:16.595535] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:15.847 [2024-12-10 21:37:16.595550] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:15.847 [2024-12-10 21:37:16.595756] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.847 21:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.847 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:15.847 21:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:15.847 21:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:15.847 21:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:15.847 21:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:15.847 21:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:15.847 21:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:15.847 21:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.847 21:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.847 21:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:10:15.847 21:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:15.847 21:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.847 21:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.847 [ 00:10:15.847 { 00:10:15.847 "name": "BaseBdev3", 00:10:15.847 "aliases": [ 00:10:15.847 "3ec173d7-aa08-4e4c-835a-0c630baf3f12" 00:10:15.847 ], 00:10:15.847 "product_name": "Malloc disk", 00:10:15.847 "block_size": 512, 00:10:15.847 "num_blocks": 65536, 00:10:15.847 "uuid": "3ec173d7-aa08-4e4c-835a-0c630baf3f12", 00:10:15.847 "assigned_rate_limits": { 00:10:15.847 "rw_ios_per_sec": 0, 00:10:15.847 "rw_mbytes_per_sec": 0, 00:10:15.847 "r_mbytes_per_sec": 0, 00:10:15.847 "w_mbytes_per_sec": 0 00:10:15.847 }, 00:10:15.847 "claimed": true, 00:10:15.847 "claim_type": "exclusive_write", 00:10:15.847 "zoned": false, 00:10:15.847 "supported_io_types": { 00:10:15.847 "read": true, 00:10:15.847 "write": true, 00:10:15.847 "unmap": true, 00:10:16.106 "flush": true, 00:10:16.106 "reset": true, 00:10:16.106 "nvme_admin": false, 00:10:16.106 "nvme_io": false, 00:10:16.106 "nvme_io_md": false, 00:10:16.106 "write_zeroes": true, 00:10:16.106 "zcopy": true, 00:10:16.106 "get_zone_info": false, 00:10:16.106 "zone_management": false, 00:10:16.106 "zone_append": false, 00:10:16.106 "compare": false, 00:10:16.107 "compare_and_write": false, 00:10:16.107 "abort": true, 00:10:16.107 "seek_hole": false, 00:10:16.107 "seek_data": false, 00:10:16.107 "copy": true, 00:10:16.107 "nvme_iov_md": false 00:10:16.107 }, 00:10:16.107 "memory_domains": [ 00:10:16.107 { 00:10:16.107 "dma_device_id": "system", 00:10:16.107 "dma_device_type": 1 00:10:16.107 }, 00:10:16.107 { 00:10:16.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.107 "dma_device_type": 2 00:10:16.107 } 00:10:16.107 ], 00:10:16.107 "driver_specific": 
{} 00:10:16.107 } 00:10:16.107 ] 00:10:16.107 21:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.107 21:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:16.107 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:16.107 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:16.107 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:16.107 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.107 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:16.107 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.107 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.107 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.107 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.107 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.107 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.107 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.107 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.107 21:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.107 21:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.107 
21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.107 21:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.107 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.107 "name": "Existed_Raid", 00:10:16.107 "uuid": "babc426b-1831-4066-b83f-74bb138b1156", 00:10:16.107 "strip_size_kb": 64, 00:10:16.107 "state": "online", 00:10:16.107 "raid_level": "raid0", 00:10:16.107 "superblock": true, 00:10:16.107 "num_base_bdevs": 3, 00:10:16.107 "num_base_bdevs_discovered": 3, 00:10:16.107 "num_base_bdevs_operational": 3, 00:10:16.107 "base_bdevs_list": [ 00:10:16.107 { 00:10:16.107 "name": "BaseBdev1", 00:10:16.107 "uuid": "51d0763a-2504-40f5-8917-063eb75cb9a4", 00:10:16.107 "is_configured": true, 00:10:16.107 "data_offset": 2048, 00:10:16.107 "data_size": 63488 00:10:16.107 }, 00:10:16.107 { 00:10:16.107 "name": "BaseBdev2", 00:10:16.107 "uuid": "4f596387-b626-4220-8595-a81aa4c69e4d", 00:10:16.107 "is_configured": true, 00:10:16.107 "data_offset": 2048, 00:10:16.107 "data_size": 63488 00:10:16.107 }, 00:10:16.107 { 00:10:16.107 "name": "BaseBdev3", 00:10:16.107 "uuid": "3ec173d7-aa08-4e4c-835a-0c630baf3f12", 00:10:16.107 "is_configured": true, 00:10:16.107 "data_offset": 2048, 00:10:16.107 "data_size": 63488 00:10:16.107 } 00:10:16.107 ] 00:10:16.107 }' 00:10:16.107 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.107 21:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.365 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:16.365 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:16.365 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:10:16.365 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:16.365 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:16.365 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:16.365 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:16.365 21:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.365 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:16.365 21:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.365 [2024-12-10 21:37:17.110000] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:16.365 21:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.625 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:16.625 "name": "Existed_Raid", 00:10:16.625 "aliases": [ 00:10:16.625 "babc426b-1831-4066-b83f-74bb138b1156" 00:10:16.625 ], 00:10:16.625 "product_name": "Raid Volume", 00:10:16.625 "block_size": 512, 00:10:16.625 "num_blocks": 190464, 00:10:16.625 "uuid": "babc426b-1831-4066-b83f-74bb138b1156", 00:10:16.625 "assigned_rate_limits": { 00:10:16.625 "rw_ios_per_sec": 0, 00:10:16.625 "rw_mbytes_per_sec": 0, 00:10:16.625 "r_mbytes_per_sec": 0, 00:10:16.625 "w_mbytes_per_sec": 0 00:10:16.625 }, 00:10:16.625 "claimed": false, 00:10:16.625 "zoned": false, 00:10:16.625 "supported_io_types": { 00:10:16.625 "read": true, 00:10:16.625 "write": true, 00:10:16.625 "unmap": true, 00:10:16.625 "flush": true, 00:10:16.625 "reset": true, 00:10:16.625 "nvme_admin": false, 00:10:16.625 "nvme_io": false, 00:10:16.625 "nvme_io_md": false, 00:10:16.625 
"write_zeroes": true, 00:10:16.625 "zcopy": false, 00:10:16.625 "get_zone_info": false, 00:10:16.625 "zone_management": false, 00:10:16.625 "zone_append": false, 00:10:16.625 "compare": false, 00:10:16.625 "compare_and_write": false, 00:10:16.625 "abort": false, 00:10:16.625 "seek_hole": false, 00:10:16.625 "seek_data": false, 00:10:16.625 "copy": false, 00:10:16.625 "nvme_iov_md": false 00:10:16.625 }, 00:10:16.625 "memory_domains": [ 00:10:16.626 { 00:10:16.626 "dma_device_id": "system", 00:10:16.626 "dma_device_type": 1 00:10:16.626 }, 00:10:16.626 { 00:10:16.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.626 "dma_device_type": 2 00:10:16.626 }, 00:10:16.626 { 00:10:16.626 "dma_device_id": "system", 00:10:16.626 "dma_device_type": 1 00:10:16.626 }, 00:10:16.626 { 00:10:16.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.626 "dma_device_type": 2 00:10:16.626 }, 00:10:16.626 { 00:10:16.626 "dma_device_id": "system", 00:10:16.626 "dma_device_type": 1 00:10:16.626 }, 00:10:16.626 { 00:10:16.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.626 "dma_device_type": 2 00:10:16.626 } 00:10:16.626 ], 00:10:16.626 "driver_specific": { 00:10:16.626 "raid": { 00:10:16.626 "uuid": "babc426b-1831-4066-b83f-74bb138b1156", 00:10:16.626 "strip_size_kb": 64, 00:10:16.626 "state": "online", 00:10:16.626 "raid_level": "raid0", 00:10:16.626 "superblock": true, 00:10:16.626 "num_base_bdevs": 3, 00:10:16.626 "num_base_bdevs_discovered": 3, 00:10:16.626 "num_base_bdevs_operational": 3, 00:10:16.626 "base_bdevs_list": [ 00:10:16.626 { 00:10:16.626 "name": "BaseBdev1", 00:10:16.626 "uuid": "51d0763a-2504-40f5-8917-063eb75cb9a4", 00:10:16.626 "is_configured": true, 00:10:16.626 "data_offset": 2048, 00:10:16.626 "data_size": 63488 00:10:16.626 }, 00:10:16.626 { 00:10:16.626 "name": "BaseBdev2", 00:10:16.626 "uuid": "4f596387-b626-4220-8595-a81aa4c69e4d", 00:10:16.626 "is_configured": true, 00:10:16.626 "data_offset": 2048, 00:10:16.626 "data_size": 63488 00:10:16.626 }, 
00:10:16.626 { 00:10:16.626 "name": "BaseBdev3", 00:10:16.626 "uuid": "3ec173d7-aa08-4e4c-835a-0c630baf3f12", 00:10:16.626 "is_configured": true, 00:10:16.626 "data_offset": 2048, 00:10:16.626 "data_size": 63488 00:10:16.626 } 00:10:16.626 ] 00:10:16.626 } 00:10:16.626 } 00:10:16.626 }' 00:10:16.626 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:16.626 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:16.626 BaseBdev2 00:10:16.626 BaseBdev3' 00:10:16.626 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.626 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:16.626 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.626 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:16.626 21:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.626 21:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.626 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.626 21:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.626 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.626 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.626 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.626 
21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:16.626 21:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.626 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.626 21:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.626 21:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.626 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.626 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.626 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.626 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:16.626 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.626 21:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.626 21:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.626 21:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.626 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.626 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.626 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:16.626 21:37:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.626 21:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.626 [2024-12-10 21:37:17.373282] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:16.626 [2024-12-10 21:37:17.373386] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:16.626 [2024-12-10 21:37:17.373473] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:16.887 21:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.887 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:16.887 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:16.887 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:16.887 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:16.887 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:16.887 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:10:16.887 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.887 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:16.887 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.887 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.887 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:16.887 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:16.887 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.887 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.887 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.887 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.887 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.887 21:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.887 21:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.887 21:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.887 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.887 "name": "Existed_Raid", 00:10:16.887 "uuid": "babc426b-1831-4066-b83f-74bb138b1156", 00:10:16.887 "strip_size_kb": 64, 00:10:16.887 "state": "offline", 00:10:16.887 "raid_level": "raid0", 00:10:16.887 "superblock": true, 00:10:16.887 "num_base_bdevs": 3, 00:10:16.887 "num_base_bdevs_discovered": 2, 00:10:16.887 "num_base_bdevs_operational": 2, 00:10:16.887 "base_bdevs_list": [ 00:10:16.887 { 00:10:16.887 "name": null, 00:10:16.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.887 "is_configured": false, 00:10:16.887 "data_offset": 0, 00:10:16.887 "data_size": 63488 00:10:16.887 }, 00:10:16.887 { 00:10:16.887 "name": "BaseBdev2", 00:10:16.887 "uuid": "4f596387-b626-4220-8595-a81aa4c69e4d", 00:10:16.887 "is_configured": true, 00:10:16.887 "data_offset": 2048, 00:10:16.887 "data_size": 63488 00:10:16.887 }, 00:10:16.887 { 00:10:16.887 "name": "BaseBdev3", 00:10:16.887 "uuid": "3ec173d7-aa08-4e4c-835a-0c630baf3f12", 
00:10:16.887 "is_configured": true, 00:10:16.887 "data_offset": 2048, 00:10:16.887 "data_size": 63488 00:10:16.887 } 00:10:16.887 ] 00:10:16.887 }' 00:10:16.887 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.887 21:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.455 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:17.455 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:17.455 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.455 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:17.455 21:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.455 21:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.455 21:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.455 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:17.455 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:17.455 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:17.455 21:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.455 21:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.455 [2024-12-10 21:37:17.990536] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:17.455 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.455 21:37:18 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:17.455 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:17.455 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.455 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:17.455 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.455 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.455 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.455 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:17.455 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:17.455 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:17.455 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.455 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.455 [2024-12-10 21:37:18.162830] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:17.455 [2024-12-10 21:37:18.162952] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:17.715 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.715 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:17.715 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:17.715 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:17.715 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:17.715 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.715 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.715 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.715 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:17.715 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:17.715 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:17.715 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:17.715 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:17.715 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:17.715 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.715 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.715 BaseBdev2 00:10:17.715 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.715 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:17.715 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:17.715 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:17.715 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:17.715 21:37:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:17.715 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:17.715 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:17.715 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.715 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.715 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.715 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:17.715 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.715 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.715 [ 00:10:17.715 { 00:10:17.715 "name": "BaseBdev2", 00:10:17.715 "aliases": [ 00:10:17.715 "89b80cbc-1e64-48fb-86c8-0b33a69a90ff" 00:10:17.715 ], 00:10:17.715 "product_name": "Malloc disk", 00:10:17.715 "block_size": 512, 00:10:17.715 "num_blocks": 65536, 00:10:17.715 "uuid": "89b80cbc-1e64-48fb-86c8-0b33a69a90ff", 00:10:17.715 "assigned_rate_limits": { 00:10:17.715 "rw_ios_per_sec": 0, 00:10:17.715 "rw_mbytes_per_sec": 0, 00:10:17.715 "r_mbytes_per_sec": 0, 00:10:17.715 "w_mbytes_per_sec": 0 00:10:17.715 }, 00:10:17.715 "claimed": false, 00:10:17.715 "zoned": false, 00:10:17.715 "supported_io_types": { 00:10:17.715 "read": true, 00:10:17.715 "write": true, 00:10:17.715 "unmap": true, 00:10:17.715 "flush": true, 00:10:17.715 "reset": true, 00:10:17.715 "nvme_admin": false, 00:10:17.715 "nvme_io": false, 00:10:17.715 "nvme_io_md": false, 00:10:17.715 "write_zeroes": true, 00:10:17.715 "zcopy": true, 00:10:17.715 "get_zone_info": false, 00:10:17.715 
"zone_management": false, 00:10:17.715 "zone_append": false, 00:10:17.715 "compare": false, 00:10:17.715 "compare_and_write": false, 00:10:17.715 "abort": true, 00:10:17.715 "seek_hole": false, 00:10:17.715 "seek_data": false, 00:10:17.715 "copy": true, 00:10:17.715 "nvme_iov_md": false 00:10:17.715 }, 00:10:17.715 "memory_domains": [ 00:10:17.715 { 00:10:17.715 "dma_device_id": "system", 00:10:17.715 "dma_device_type": 1 00:10:17.715 }, 00:10:17.715 { 00:10:17.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.715 "dma_device_type": 2 00:10:17.715 } 00:10:17.715 ], 00:10:17.715 "driver_specific": {} 00:10:17.715 } 00:10:17.715 ] 00:10:17.715 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.715 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:17.715 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:17.715 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:17.715 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:17.715 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.715 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.715 BaseBdev3 00:10:17.716 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.716 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:17.716 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:17.716 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:17.716 21:37:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:10:17.716 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:17.716 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:17.716 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:17.716 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.716 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.716 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.716 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:17.716 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.716 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.975 [ 00:10:17.975 { 00:10:17.975 "name": "BaseBdev3", 00:10:17.975 "aliases": [ 00:10:17.975 "0295ea1f-c129-4e63-a6c6-d82cd625375d" 00:10:17.975 ], 00:10:17.975 "product_name": "Malloc disk", 00:10:17.975 "block_size": 512, 00:10:17.975 "num_blocks": 65536, 00:10:17.975 "uuid": "0295ea1f-c129-4e63-a6c6-d82cd625375d", 00:10:17.975 "assigned_rate_limits": { 00:10:17.975 "rw_ios_per_sec": 0, 00:10:17.975 "rw_mbytes_per_sec": 0, 00:10:17.975 "r_mbytes_per_sec": 0, 00:10:17.975 "w_mbytes_per_sec": 0 00:10:17.975 }, 00:10:17.975 "claimed": false, 00:10:17.975 "zoned": false, 00:10:17.975 "supported_io_types": { 00:10:17.975 "read": true, 00:10:17.975 "write": true, 00:10:17.975 "unmap": true, 00:10:17.975 "flush": true, 00:10:17.975 "reset": true, 00:10:17.975 "nvme_admin": false, 00:10:17.975 "nvme_io": false, 00:10:17.975 "nvme_io_md": false, 00:10:17.975 "write_zeroes": true, 00:10:17.975 
"zcopy": true, 00:10:17.975 "get_zone_info": false, 00:10:17.975 "zone_management": false, 00:10:17.975 "zone_append": false, 00:10:17.975 "compare": false, 00:10:17.975 "compare_and_write": false, 00:10:17.975 "abort": true, 00:10:17.975 "seek_hole": false, 00:10:17.975 "seek_data": false, 00:10:17.975 "copy": true, 00:10:17.975 "nvme_iov_md": false 00:10:17.975 }, 00:10:17.975 "memory_domains": [ 00:10:17.975 { 00:10:17.975 "dma_device_id": "system", 00:10:17.975 "dma_device_type": 1 00:10:17.975 }, 00:10:17.975 { 00:10:17.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.975 "dma_device_type": 2 00:10:17.975 } 00:10:17.975 ], 00:10:17.975 "driver_specific": {} 00:10:17.975 } 00:10:17.975 ] 00:10:17.975 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.975 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:17.975 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:17.975 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:17.975 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:17.975 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.975 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.975 [2024-12-10 21:37:18.519975] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:17.975 [2024-12-10 21:37:18.520030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:17.975 [2024-12-10 21:37:18.520060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:17.975 [2024-12-10 21:37:18.522147] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:17.975 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.975 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:17.975 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.976 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.976 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.976 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.976 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.976 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.976 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.976 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.976 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.976 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.976 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.976 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.976 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.976 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.976 21:37:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.976 "name": "Existed_Raid", 00:10:17.976 "uuid": "3530d79b-966b-4033-89a9-62c8487c4931", 00:10:17.976 "strip_size_kb": 64, 00:10:17.976 "state": "configuring", 00:10:17.976 "raid_level": "raid0", 00:10:17.976 "superblock": true, 00:10:17.976 "num_base_bdevs": 3, 00:10:17.976 "num_base_bdevs_discovered": 2, 00:10:17.976 "num_base_bdevs_operational": 3, 00:10:17.976 "base_bdevs_list": [ 00:10:17.976 { 00:10:17.976 "name": "BaseBdev1", 00:10:17.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.976 "is_configured": false, 00:10:17.976 "data_offset": 0, 00:10:17.976 "data_size": 0 00:10:17.976 }, 00:10:17.976 { 00:10:17.976 "name": "BaseBdev2", 00:10:17.976 "uuid": "89b80cbc-1e64-48fb-86c8-0b33a69a90ff", 00:10:17.976 "is_configured": true, 00:10:17.976 "data_offset": 2048, 00:10:17.976 "data_size": 63488 00:10:17.976 }, 00:10:17.976 { 00:10:17.976 "name": "BaseBdev3", 00:10:17.976 "uuid": "0295ea1f-c129-4e63-a6c6-d82cd625375d", 00:10:17.976 "is_configured": true, 00:10:17.976 "data_offset": 2048, 00:10:17.976 "data_size": 63488 00:10:17.976 } 00:10:17.976 ] 00:10:17.976 }' 00:10:17.976 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.976 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.235 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:18.235 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.235 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.235 [2024-12-10 21:37:19.003495] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:18.235 21:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.235 21:37:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:18.235 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.235 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.235 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:18.235 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.235 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.235 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.235 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.235 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.235 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.235 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.235 21:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.235 21:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.235 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.494 21:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.494 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.494 "name": "Existed_Raid", 00:10:18.494 "uuid": "3530d79b-966b-4033-89a9-62c8487c4931", 00:10:18.494 "strip_size_kb": 64, 
00:10:18.494 "state": "configuring", 00:10:18.494 "raid_level": "raid0", 00:10:18.494 "superblock": true, 00:10:18.494 "num_base_bdevs": 3, 00:10:18.494 "num_base_bdevs_discovered": 1, 00:10:18.494 "num_base_bdevs_operational": 3, 00:10:18.494 "base_bdevs_list": [ 00:10:18.494 { 00:10:18.494 "name": "BaseBdev1", 00:10:18.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.494 "is_configured": false, 00:10:18.494 "data_offset": 0, 00:10:18.494 "data_size": 0 00:10:18.494 }, 00:10:18.494 { 00:10:18.494 "name": null, 00:10:18.494 "uuid": "89b80cbc-1e64-48fb-86c8-0b33a69a90ff", 00:10:18.494 "is_configured": false, 00:10:18.494 "data_offset": 0, 00:10:18.494 "data_size": 63488 00:10:18.494 }, 00:10:18.494 { 00:10:18.494 "name": "BaseBdev3", 00:10:18.494 "uuid": "0295ea1f-c129-4e63-a6c6-d82cd625375d", 00:10:18.494 "is_configured": true, 00:10:18.494 "data_offset": 2048, 00:10:18.494 "data_size": 63488 00:10:18.494 } 00:10:18.494 ] 00:10:18.494 }' 00:10:18.494 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.494 21:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.752 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.752 21:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.752 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:18.752 21:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.752 21:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.752 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:18.752 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:10:18.752 21:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.752 21:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.011 [2024-12-10 21:37:19.565536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:19.011 BaseBdev1 00:10:19.011 21:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.011 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:19.011 21:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:19.011 21:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:19.011 21:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:19.011 21:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:19.011 21:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:19.011 21:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:19.011 21:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.012 21:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.012 21:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.012 21:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:19.012 21:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.012 21:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.012 
[ 00:10:19.012 { 00:10:19.012 "name": "BaseBdev1", 00:10:19.012 "aliases": [ 00:10:19.012 "0229c8b7-a4ba-4174-88c1-b5aa0486b390" 00:10:19.012 ], 00:10:19.012 "product_name": "Malloc disk", 00:10:19.012 "block_size": 512, 00:10:19.012 "num_blocks": 65536, 00:10:19.012 "uuid": "0229c8b7-a4ba-4174-88c1-b5aa0486b390", 00:10:19.012 "assigned_rate_limits": { 00:10:19.012 "rw_ios_per_sec": 0, 00:10:19.012 "rw_mbytes_per_sec": 0, 00:10:19.012 "r_mbytes_per_sec": 0, 00:10:19.012 "w_mbytes_per_sec": 0 00:10:19.012 }, 00:10:19.012 "claimed": true, 00:10:19.012 "claim_type": "exclusive_write", 00:10:19.012 "zoned": false, 00:10:19.012 "supported_io_types": { 00:10:19.012 "read": true, 00:10:19.012 "write": true, 00:10:19.012 "unmap": true, 00:10:19.012 "flush": true, 00:10:19.012 "reset": true, 00:10:19.012 "nvme_admin": false, 00:10:19.012 "nvme_io": false, 00:10:19.012 "nvme_io_md": false, 00:10:19.012 "write_zeroes": true, 00:10:19.012 "zcopy": true, 00:10:19.012 "get_zone_info": false, 00:10:19.012 "zone_management": false, 00:10:19.012 "zone_append": false, 00:10:19.012 "compare": false, 00:10:19.012 "compare_and_write": false, 00:10:19.012 "abort": true, 00:10:19.012 "seek_hole": false, 00:10:19.012 "seek_data": false, 00:10:19.012 "copy": true, 00:10:19.012 "nvme_iov_md": false 00:10:19.012 }, 00:10:19.012 "memory_domains": [ 00:10:19.012 { 00:10:19.012 "dma_device_id": "system", 00:10:19.012 "dma_device_type": 1 00:10:19.012 }, 00:10:19.012 { 00:10:19.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.012 "dma_device_type": 2 00:10:19.012 } 00:10:19.012 ], 00:10:19.012 "driver_specific": {} 00:10:19.012 } 00:10:19.012 ] 00:10:19.012 21:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.012 21:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:19.012 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:10:19.012 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.012 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.012 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:19.012 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.012 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.012 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.012 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.012 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.012 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.012 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.012 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.012 21:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.012 21:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.012 21:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.012 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.012 "name": "Existed_Raid", 00:10:19.012 "uuid": "3530d79b-966b-4033-89a9-62c8487c4931", 00:10:19.012 "strip_size_kb": 64, 00:10:19.012 "state": "configuring", 00:10:19.012 "raid_level": "raid0", 00:10:19.012 "superblock": true, 
00:10:19.012 "num_base_bdevs": 3, 00:10:19.012 "num_base_bdevs_discovered": 2, 00:10:19.012 "num_base_bdevs_operational": 3, 00:10:19.012 "base_bdevs_list": [ 00:10:19.012 { 00:10:19.012 "name": "BaseBdev1", 00:10:19.012 "uuid": "0229c8b7-a4ba-4174-88c1-b5aa0486b390", 00:10:19.012 "is_configured": true, 00:10:19.012 "data_offset": 2048, 00:10:19.012 "data_size": 63488 00:10:19.012 }, 00:10:19.012 { 00:10:19.012 "name": null, 00:10:19.012 "uuid": "89b80cbc-1e64-48fb-86c8-0b33a69a90ff", 00:10:19.012 "is_configured": false, 00:10:19.012 "data_offset": 0, 00:10:19.012 "data_size": 63488 00:10:19.012 }, 00:10:19.012 { 00:10:19.012 "name": "BaseBdev3", 00:10:19.012 "uuid": "0295ea1f-c129-4e63-a6c6-d82cd625375d", 00:10:19.012 "is_configured": true, 00:10:19.012 "data_offset": 2048, 00:10:19.012 "data_size": 63488 00:10:19.012 } 00:10:19.012 ] 00:10:19.012 }' 00:10:19.012 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.012 21:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.581 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.581 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:19.581 21:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.581 21:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.581 21:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.581 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:19.581 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:19.581 21:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:10:19.581 21:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.581 [2024-12-10 21:37:20.104729] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:19.581 21:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.581 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:19.581 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.581 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.581 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:19.581 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.581 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.581 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.581 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.581 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.581 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.581 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.581 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.581 21:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.581 21:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:19.581 21:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.581 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.581 "name": "Existed_Raid", 00:10:19.581 "uuid": "3530d79b-966b-4033-89a9-62c8487c4931", 00:10:19.581 "strip_size_kb": 64, 00:10:19.581 "state": "configuring", 00:10:19.581 "raid_level": "raid0", 00:10:19.581 "superblock": true, 00:10:19.581 "num_base_bdevs": 3, 00:10:19.581 "num_base_bdevs_discovered": 1, 00:10:19.581 "num_base_bdevs_operational": 3, 00:10:19.581 "base_bdevs_list": [ 00:10:19.581 { 00:10:19.581 "name": "BaseBdev1", 00:10:19.581 "uuid": "0229c8b7-a4ba-4174-88c1-b5aa0486b390", 00:10:19.581 "is_configured": true, 00:10:19.581 "data_offset": 2048, 00:10:19.581 "data_size": 63488 00:10:19.581 }, 00:10:19.581 { 00:10:19.581 "name": null, 00:10:19.581 "uuid": "89b80cbc-1e64-48fb-86c8-0b33a69a90ff", 00:10:19.581 "is_configured": false, 00:10:19.581 "data_offset": 0, 00:10:19.581 "data_size": 63488 00:10:19.581 }, 00:10:19.581 { 00:10:19.581 "name": null, 00:10:19.581 "uuid": "0295ea1f-c129-4e63-a6c6-d82cd625375d", 00:10:19.581 "is_configured": false, 00:10:19.581 "data_offset": 0, 00:10:19.581 "data_size": 63488 00:10:19.581 } 00:10:19.581 ] 00:10:19.581 }' 00:10:19.581 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.581 21:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.840 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.840 21:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.840 21:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.840 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 
00:10:19.840 21:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.097 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:20.097 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:20.097 21:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.098 21:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.098 [2024-12-10 21:37:20.635930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:20.098 21:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.098 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:20.098 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.098 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.098 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.098 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.098 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:20.098 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.098 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.098 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.098 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:10:20.098 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.098 21:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.098 21:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.098 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.098 21:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.098 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.098 "name": "Existed_Raid", 00:10:20.098 "uuid": "3530d79b-966b-4033-89a9-62c8487c4931", 00:10:20.098 "strip_size_kb": 64, 00:10:20.098 "state": "configuring", 00:10:20.098 "raid_level": "raid0", 00:10:20.098 "superblock": true, 00:10:20.098 "num_base_bdevs": 3, 00:10:20.098 "num_base_bdevs_discovered": 2, 00:10:20.098 "num_base_bdevs_operational": 3, 00:10:20.098 "base_bdevs_list": [ 00:10:20.098 { 00:10:20.098 "name": "BaseBdev1", 00:10:20.098 "uuid": "0229c8b7-a4ba-4174-88c1-b5aa0486b390", 00:10:20.098 "is_configured": true, 00:10:20.098 "data_offset": 2048, 00:10:20.098 "data_size": 63488 00:10:20.098 }, 00:10:20.098 { 00:10:20.098 "name": null, 00:10:20.098 "uuid": "89b80cbc-1e64-48fb-86c8-0b33a69a90ff", 00:10:20.098 "is_configured": false, 00:10:20.098 "data_offset": 0, 00:10:20.098 "data_size": 63488 00:10:20.098 }, 00:10:20.098 { 00:10:20.098 "name": "BaseBdev3", 00:10:20.098 "uuid": "0295ea1f-c129-4e63-a6c6-d82cd625375d", 00:10:20.098 "is_configured": true, 00:10:20.098 "data_offset": 2048, 00:10:20.098 "data_size": 63488 00:10:20.098 } 00:10:20.098 ] 00:10:20.098 }' 00:10:20.098 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.098 21:37:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:20.665 21:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.665 21:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:20.665 21:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.665 21:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.665 21:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.665 21:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:20.665 21:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:20.665 21:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.665 21:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.665 [2024-12-10 21:37:21.191229] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:20.665 21:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.665 21:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:20.665 21:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.665 21:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.665 21:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.665 21:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.665 21:37:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:20.665 21:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.665 21:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.665 21:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.665 21:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.665 21:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.665 21:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.665 21:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.665 21:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.665 21:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.665 21:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.665 "name": "Existed_Raid", 00:10:20.665 "uuid": "3530d79b-966b-4033-89a9-62c8487c4931", 00:10:20.665 "strip_size_kb": 64, 00:10:20.665 "state": "configuring", 00:10:20.665 "raid_level": "raid0", 00:10:20.665 "superblock": true, 00:10:20.665 "num_base_bdevs": 3, 00:10:20.665 "num_base_bdevs_discovered": 1, 00:10:20.665 "num_base_bdevs_operational": 3, 00:10:20.665 "base_bdevs_list": [ 00:10:20.665 { 00:10:20.665 "name": null, 00:10:20.665 "uuid": "0229c8b7-a4ba-4174-88c1-b5aa0486b390", 00:10:20.665 "is_configured": false, 00:10:20.665 "data_offset": 0, 00:10:20.665 "data_size": 63488 00:10:20.665 }, 00:10:20.665 { 00:10:20.665 "name": null, 00:10:20.665 "uuid": "89b80cbc-1e64-48fb-86c8-0b33a69a90ff", 00:10:20.665 "is_configured": false, 00:10:20.665 "data_offset": 0, 00:10:20.665 
"data_size": 63488 00:10:20.665 }, 00:10:20.665 { 00:10:20.665 "name": "BaseBdev3", 00:10:20.665 "uuid": "0295ea1f-c129-4e63-a6c6-d82cd625375d", 00:10:20.665 "is_configured": true, 00:10:20.665 "data_offset": 2048, 00:10:20.665 "data_size": 63488 00:10:20.665 } 00:10:20.665 ] 00:10:20.665 }' 00:10:20.665 21:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.666 21:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.233 21:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:21.233 21:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.233 21:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.233 21:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.233 21:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.233 21:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:21.233 21:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:21.233 21:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.233 21:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.233 [2024-12-10 21:37:21.811074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:21.233 21:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.233 21:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:21.233 21:37:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.233 21:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.233 21:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:21.233 21:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.233 21:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.233 21:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.233 21:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.233 21:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.233 21:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.233 21:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.233 21:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.233 21:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.233 21:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.233 21:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.233 21:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.233 "name": "Existed_Raid", 00:10:21.233 "uuid": "3530d79b-966b-4033-89a9-62c8487c4931", 00:10:21.233 "strip_size_kb": 64, 00:10:21.233 "state": "configuring", 00:10:21.233 "raid_level": "raid0", 00:10:21.233 "superblock": true, 00:10:21.233 "num_base_bdevs": 3, 00:10:21.233 
"num_base_bdevs_discovered": 2, 00:10:21.233 "num_base_bdevs_operational": 3, 00:10:21.233 "base_bdevs_list": [ 00:10:21.233 { 00:10:21.233 "name": null, 00:10:21.233 "uuid": "0229c8b7-a4ba-4174-88c1-b5aa0486b390", 00:10:21.233 "is_configured": false, 00:10:21.233 "data_offset": 0, 00:10:21.233 "data_size": 63488 00:10:21.233 }, 00:10:21.234 { 00:10:21.234 "name": "BaseBdev2", 00:10:21.234 "uuid": "89b80cbc-1e64-48fb-86c8-0b33a69a90ff", 00:10:21.234 "is_configured": true, 00:10:21.234 "data_offset": 2048, 00:10:21.234 "data_size": 63488 00:10:21.234 }, 00:10:21.234 { 00:10:21.234 "name": "BaseBdev3", 00:10:21.234 "uuid": "0295ea1f-c129-4e63-a6c6-d82cd625375d", 00:10:21.234 "is_configured": true, 00:10:21.234 "data_offset": 2048, 00:10:21.234 "data_size": 63488 00:10:21.234 } 00:10:21.234 ] 00:10:21.234 }' 00:10:21.234 21:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.234 21:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.492 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.492 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:21.492 21:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.492 21:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.751 21:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.751 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:21.751 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.751 21:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.751 21:37:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.751 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:21.751 21:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.751 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0229c8b7-a4ba-4174-88c1-b5aa0486b390 00:10:21.752 21:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.752 21:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.752 [2024-12-10 21:37:22.411509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:21.752 [2024-12-10 21:37:22.411772] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:21.752 [2024-12-10 21:37:22.411790] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:21.752 [2024-12-10 21:37:22.412055] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:21.752 NewBaseBdev 00:10:21.752 [2024-12-10 21:37:22.412218] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:21.752 [2024-12-10 21:37:22.412229] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:21.752 [2024-12-10 21:37:22.412378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.752 21:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.752 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:21.752 21:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 
00:10:21.752 21:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:21.752 21:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:21.752 21:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:21.752 21:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:21.752 21:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:21.752 21:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.752 21:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.752 21:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.752 21:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:21.752 21:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.752 21:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.752 [ 00:10:21.752 { 00:10:21.752 "name": "NewBaseBdev", 00:10:21.752 "aliases": [ 00:10:21.752 "0229c8b7-a4ba-4174-88c1-b5aa0486b390" 00:10:21.752 ], 00:10:21.752 "product_name": "Malloc disk", 00:10:21.752 "block_size": 512, 00:10:21.752 "num_blocks": 65536, 00:10:21.752 "uuid": "0229c8b7-a4ba-4174-88c1-b5aa0486b390", 00:10:21.752 "assigned_rate_limits": { 00:10:21.752 "rw_ios_per_sec": 0, 00:10:21.752 "rw_mbytes_per_sec": 0, 00:10:21.752 "r_mbytes_per_sec": 0, 00:10:21.752 "w_mbytes_per_sec": 0 00:10:21.752 }, 00:10:21.752 "claimed": true, 00:10:21.752 "claim_type": "exclusive_write", 00:10:21.752 "zoned": false, 00:10:21.752 "supported_io_types": { 00:10:21.752 "read": true, 00:10:21.752 "write": true, 
00:10:21.752 "unmap": true, 00:10:21.752 "flush": true, 00:10:21.752 "reset": true, 00:10:21.752 "nvme_admin": false, 00:10:21.752 "nvme_io": false, 00:10:21.752 "nvme_io_md": false, 00:10:21.752 "write_zeroes": true, 00:10:21.752 "zcopy": true, 00:10:21.752 "get_zone_info": false, 00:10:21.752 "zone_management": false, 00:10:21.752 "zone_append": false, 00:10:21.752 "compare": false, 00:10:21.752 "compare_and_write": false, 00:10:21.752 "abort": true, 00:10:21.752 "seek_hole": false, 00:10:21.752 "seek_data": false, 00:10:21.752 "copy": true, 00:10:21.752 "nvme_iov_md": false 00:10:21.752 }, 00:10:21.752 "memory_domains": [ 00:10:21.752 { 00:10:21.752 "dma_device_id": "system", 00:10:21.752 "dma_device_type": 1 00:10:21.752 }, 00:10:21.752 { 00:10:21.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.752 "dma_device_type": 2 00:10:21.752 } 00:10:21.752 ], 00:10:21.752 "driver_specific": {} 00:10:21.752 } 00:10:21.752 ] 00:10:21.752 21:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.752 21:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:21.752 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:21.752 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.752 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.752 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:21.752 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.752 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.752 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:21.752 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.752 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.752 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.752 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.752 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.752 21:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.752 21:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.752 21:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.752 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.752 "name": "Existed_Raid", 00:10:21.752 "uuid": "3530d79b-966b-4033-89a9-62c8487c4931", 00:10:21.752 "strip_size_kb": 64, 00:10:21.752 "state": "online", 00:10:21.752 "raid_level": "raid0", 00:10:21.752 "superblock": true, 00:10:21.752 "num_base_bdevs": 3, 00:10:21.752 "num_base_bdevs_discovered": 3, 00:10:21.752 "num_base_bdevs_operational": 3, 00:10:21.752 "base_bdevs_list": [ 00:10:21.752 { 00:10:21.752 "name": "NewBaseBdev", 00:10:21.752 "uuid": "0229c8b7-a4ba-4174-88c1-b5aa0486b390", 00:10:21.752 "is_configured": true, 00:10:21.752 "data_offset": 2048, 00:10:21.752 "data_size": 63488 00:10:21.752 }, 00:10:21.752 { 00:10:21.752 "name": "BaseBdev2", 00:10:21.752 "uuid": "89b80cbc-1e64-48fb-86c8-0b33a69a90ff", 00:10:21.752 "is_configured": true, 00:10:21.752 "data_offset": 2048, 00:10:21.752 "data_size": 63488 00:10:21.752 }, 00:10:21.752 { 00:10:21.752 "name": "BaseBdev3", 00:10:21.752 "uuid": 
"0295ea1f-c129-4e63-a6c6-d82cd625375d", 00:10:21.752 "is_configured": true, 00:10:21.752 "data_offset": 2048, 00:10:21.752 "data_size": 63488 00:10:21.752 } 00:10:21.752 ] 00:10:21.752 }' 00:10:21.752 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.752 21:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.321 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:22.321 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:22.321 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:22.321 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:22.321 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:22.321 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:22.321 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:22.321 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:22.321 21:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.321 21:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.321 [2024-12-10 21:37:22.919079] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:22.321 21:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.321 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:22.321 "name": "Existed_Raid", 00:10:22.321 "aliases": [ 00:10:22.321 "3530d79b-966b-4033-89a9-62c8487c4931" 
00:10:22.321 ], 00:10:22.321 "product_name": "Raid Volume", 00:10:22.321 "block_size": 512, 00:10:22.321 "num_blocks": 190464, 00:10:22.321 "uuid": "3530d79b-966b-4033-89a9-62c8487c4931", 00:10:22.321 "assigned_rate_limits": { 00:10:22.321 "rw_ios_per_sec": 0, 00:10:22.321 "rw_mbytes_per_sec": 0, 00:10:22.321 "r_mbytes_per_sec": 0, 00:10:22.321 "w_mbytes_per_sec": 0 00:10:22.321 }, 00:10:22.321 "claimed": false, 00:10:22.321 "zoned": false, 00:10:22.321 "supported_io_types": { 00:10:22.321 "read": true, 00:10:22.321 "write": true, 00:10:22.321 "unmap": true, 00:10:22.321 "flush": true, 00:10:22.321 "reset": true, 00:10:22.321 "nvme_admin": false, 00:10:22.321 "nvme_io": false, 00:10:22.322 "nvme_io_md": false, 00:10:22.322 "write_zeroes": true, 00:10:22.322 "zcopy": false, 00:10:22.322 "get_zone_info": false, 00:10:22.322 "zone_management": false, 00:10:22.322 "zone_append": false, 00:10:22.322 "compare": false, 00:10:22.322 "compare_and_write": false, 00:10:22.322 "abort": false, 00:10:22.322 "seek_hole": false, 00:10:22.322 "seek_data": false, 00:10:22.322 "copy": false, 00:10:22.322 "nvme_iov_md": false 00:10:22.322 }, 00:10:22.322 "memory_domains": [ 00:10:22.322 { 00:10:22.322 "dma_device_id": "system", 00:10:22.322 "dma_device_type": 1 00:10:22.322 }, 00:10:22.322 { 00:10:22.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.322 "dma_device_type": 2 00:10:22.322 }, 00:10:22.322 { 00:10:22.322 "dma_device_id": "system", 00:10:22.322 "dma_device_type": 1 00:10:22.322 }, 00:10:22.322 { 00:10:22.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.322 "dma_device_type": 2 00:10:22.322 }, 00:10:22.322 { 00:10:22.322 "dma_device_id": "system", 00:10:22.322 "dma_device_type": 1 00:10:22.322 }, 00:10:22.322 { 00:10:22.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.322 "dma_device_type": 2 00:10:22.322 } 00:10:22.322 ], 00:10:22.322 "driver_specific": { 00:10:22.322 "raid": { 00:10:22.322 "uuid": "3530d79b-966b-4033-89a9-62c8487c4931", 00:10:22.322 
"strip_size_kb": 64, 00:10:22.322 "state": "online", 00:10:22.322 "raid_level": "raid0", 00:10:22.322 "superblock": true, 00:10:22.322 "num_base_bdevs": 3, 00:10:22.322 "num_base_bdevs_discovered": 3, 00:10:22.322 "num_base_bdevs_operational": 3, 00:10:22.322 "base_bdevs_list": [ 00:10:22.322 { 00:10:22.322 "name": "NewBaseBdev", 00:10:22.322 "uuid": "0229c8b7-a4ba-4174-88c1-b5aa0486b390", 00:10:22.322 "is_configured": true, 00:10:22.322 "data_offset": 2048, 00:10:22.322 "data_size": 63488 00:10:22.322 }, 00:10:22.322 { 00:10:22.322 "name": "BaseBdev2", 00:10:22.322 "uuid": "89b80cbc-1e64-48fb-86c8-0b33a69a90ff", 00:10:22.322 "is_configured": true, 00:10:22.322 "data_offset": 2048, 00:10:22.322 "data_size": 63488 00:10:22.322 }, 00:10:22.322 { 00:10:22.322 "name": "BaseBdev3", 00:10:22.322 "uuid": "0295ea1f-c129-4e63-a6c6-d82cd625375d", 00:10:22.322 "is_configured": true, 00:10:22.322 "data_offset": 2048, 00:10:22.322 "data_size": 63488 00:10:22.322 } 00:10:22.322 ] 00:10:22.322 } 00:10:22.322 } 00:10:22.322 }' 00:10:22.322 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:22.322 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:22.322 BaseBdev2 00:10:22.322 BaseBdev3' 00:10:22.322 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.322 21:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:22.322 21:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.322 21:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:22.322 21:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:22.322 21:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.322 21:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.322 21:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.322 21:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.322 21:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.322 21:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.322 21:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:22.322 21:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.322 21:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.322 21:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.322 21:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.581 21:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.581 21:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.581 21:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.581 21:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.581 21:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 00:10:22.581 21:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.581 21:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.581 21:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.581 21:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.581 21:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.581 21:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:22.581 21:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.581 21:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.581 [2024-12-10 21:37:23.178309] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:22.581 [2024-12-10 21:37:23.178340] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:22.581 [2024-12-10 21:37:23.178447] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:22.581 [2024-12-10 21:37:23.178509] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:22.581 [2024-12-10 21:37:23.178522] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:22.581 21:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.581 21:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64540 00:10:22.581 21:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64540 ']' 00:10:22.581 21:37:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 64540 00:10:22.581 21:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:22.581 21:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:22.581 21:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64540 00:10:22.581 21:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:22.581 21:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:22.581 21:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64540' 00:10:22.581 killing process with pid 64540 00:10:22.581 21:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64540 00:10:22.581 [2024-12-10 21:37:23.223508] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:22.582 21:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64540 00:10:22.841 [2024-12-10 21:37:23.579210] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:24.220 21:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:24.220 00:10:24.220 real 0m11.379s 00:10:24.220 user 0m18.043s 00:10:24.220 sys 0m1.868s 00:10:24.220 21:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.220 21:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.220 ************************************ 00:10:24.220 END TEST raid_state_function_test_sb 00:10:24.220 ************************************ 00:10:24.220 21:37:24 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:10:24.220 21:37:24 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:24.220 21:37:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.220 21:37:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:24.220 ************************************ 00:10:24.220 START TEST raid_superblock_test 00:10:24.220 ************************************ 00:10:24.220 21:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:10:24.220 21:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:24.220 21:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:24.220 21:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:24.220 21:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:24.220 21:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:24.220 21:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:24.220 21:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:24.220 21:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:24.220 21:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:24.220 21:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:24.220 21:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:24.220 21:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:24.220 21:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:24.220 21:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:24.220 21:37:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:24.220 21:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:24.220 21:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65176 00:10:24.220 21:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65176 00:10:24.220 21:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65176 ']' 00:10:24.220 21:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.220 21:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:24.220 21:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.220 21:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:24.220 21:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:24.220 21:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.480 [2024-12-10 21:37:25.067256] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:10:24.480 [2024-12-10 21:37:25.067518] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65176 ] 00:10:24.480 [2024-12-10 21:37:25.233832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.739 [2024-12-10 21:37:25.370510] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.998 [2024-12-10 21:37:25.610529] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.998 [2024-12-10 21:37:25.610702] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.257 21:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:25.257 21:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:25.257 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:25.257 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:25.257 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:25.257 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:25.257 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:25.257 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:25.257 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:25.257 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:25.257 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:25.257 
21:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.257 21:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.515 malloc1 00:10:25.515 21:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.515 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:25.515 21:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.515 21:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.515 [2024-12-10 21:37:26.085512] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:25.515 [2024-12-10 21:37:26.085588] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.515 [2024-12-10 21:37:26.085616] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:25.515 [2024-12-10 21:37:26.085627] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.515 [2024-12-10 21:37:26.088190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.515 [2024-12-10 21:37:26.088235] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:25.515 pt1 00:10:25.515 21:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.515 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:25.515 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:25.515 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:25.515 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:25.515 21:37:26 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:25.515 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:25.515 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:25.515 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.516 malloc2 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.516 [2024-12-10 21:37:26.147883] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:25.516 [2024-12-10 21:37:26.147954] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.516 [2024-12-10 21:37:26.147981] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:25.516 [2024-12-10 21:37:26.147992] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.516 [2024-12-10 21:37:26.150526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.516 [2024-12-10 21:37:26.150564] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:25.516 
pt2 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.516 malloc3 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.516 [2024-12-10 21:37:26.221473] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:25.516 [2024-12-10 21:37:26.221535] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.516 [2024-12-10 21:37:26.221561] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:25.516 [2024-12-10 21:37:26.221572] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.516 [2024-12-10 21:37:26.224037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.516 [2024-12-10 21:37:26.224082] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:25.516 pt3 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.516 [2024-12-10 21:37:26.233494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:25.516 [2024-12-10 21:37:26.235587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:25.516 [2024-12-10 21:37:26.235683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:25.516 [2024-12-10 21:37:26.235881] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:25.516 [2024-12-10 21:37:26.235898] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:25.516 [2024-12-10 21:37:26.236210] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
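The DEBUG record `blockcnt 190464, blocklen 512` for the new raid0 bdev is consistent with the rest of the dump: each base bdev is 32 MiB of 512-byte blocks (65536 blocks), and because the array is created with `-s` (superblock), each base reserves a metadata region, leaving `data_offset: 2048` and `data_size: 63488` per member. A quick check of that arithmetic, assuming raid0 capacity is simply the sum of the usable data regions of the three members:

```python
BLOCK_SIZE = 512
base_blocks = 32 * 1024 * 1024 // BLOCK_SIZE  # bdev_malloc_create 32 512 -> 65536 blocks
data_offset = 2048                            # superblock region per base bdev (-s flag)
data_size = base_blocks - data_offset         # matches "data_size": 63488 in the dump
raid0_blockcnt = 3 * data_size                # three members striped with raid0

print(raid0_blockcnt)
```

63488 is also an exact multiple of the 64 KiB strip size (128 blocks), so no capacity is lost to strip rounding in this configuration.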
00:10:25.516 [2024-12-10 21:37:26.236390] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:25.516 [2024-12-10 21:37:26.236401] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:25.516 [2024-12-10 21:37:26.236627] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.516 21:37:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.516 "name": "raid_bdev1", 00:10:25.516 "uuid": "8b459325-1ec8-49e0-958e-781da727847b", 00:10:25.516 "strip_size_kb": 64, 00:10:25.516 "state": "online", 00:10:25.516 "raid_level": "raid0", 00:10:25.516 "superblock": true, 00:10:25.516 "num_base_bdevs": 3, 00:10:25.516 "num_base_bdevs_discovered": 3, 00:10:25.516 "num_base_bdevs_operational": 3, 00:10:25.516 "base_bdevs_list": [ 00:10:25.516 { 00:10:25.516 "name": "pt1", 00:10:25.516 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:25.516 "is_configured": true, 00:10:25.516 "data_offset": 2048, 00:10:25.516 "data_size": 63488 00:10:25.516 }, 00:10:25.516 { 00:10:25.516 "name": "pt2", 00:10:25.516 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:25.516 "is_configured": true, 00:10:25.516 "data_offset": 2048, 00:10:25.516 "data_size": 63488 00:10:25.516 }, 00:10:25.516 { 00:10:25.516 "name": "pt3", 00:10:25.516 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:25.516 "is_configured": true, 00:10:25.516 "data_offset": 2048, 00:10:25.516 "data_size": 63488 00:10:25.516 } 00:10:25.516 ] 00:10:25.516 }' 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.516 21:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.083 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:26.083 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:26.083 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:26.083 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:10:26.083 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:26.083 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:26.083 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:26.083 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:26.083 21:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.083 21:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.083 [2024-12-10 21:37:26.740898] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:26.083 21:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.083 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:26.083 "name": "raid_bdev1", 00:10:26.083 "aliases": [ 00:10:26.083 "8b459325-1ec8-49e0-958e-781da727847b" 00:10:26.083 ], 00:10:26.083 "product_name": "Raid Volume", 00:10:26.083 "block_size": 512, 00:10:26.083 "num_blocks": 190464, 00:10:26.083 "uuid": "8b459325-1ec8-49e0-958e-781da727847b", 00:10:26.083 "assigned_rate_limits": { 00:10:26.083 "rw_ios_per_sec": 0, 00:10:26.083 "rw_mbytes_per_sec": 0, 00:10:26.083 "r_mbytes_per_sec": 0, 00:10:26.083 "w_mbytes_per_sec": 0 00:10:26.083 }, 00:10:26.083 "claimed": false, 00:10:26.083 "zoned": false, 00:10:26.083 "supported_io_types": { 00:10:26.083 "read": true, 00:10:26.083 "write": true, 00:10:26.083 "unmap": true, 00:10:26.083 "flush": true, 00:10:26.083 "reset": true, 00:10:26.083 "nvme_admin": false, 00:10:26.083 "nvme_io": false, 00:10:26.083 "nvme_io_md": false, 00:10:26.083 "write_zeroes": true, 00:10:26.083 "zcopy": false, 00:10:26.083 "get_zone_info": false, 00:10:26.083 "zone_management": false, 00:10:26.083 "zone_append": false, 00:10:26.083 "compare": 
false, 00:10:26.083 "compare_and_write": false, 00:10:26.083 "abort": false, 00:10:26.083 "seek_hole": false, 00:10:26.083 "seek_data": false, 00:10:26.083 "copy": false, 00:10:26.083 "nvme_iov_md": false 00:10:26.083 }, 00:10:26.083 "memory_domains": [ 00:10:26.083 { 00:10:26.083 "dma_device_id": "system", 00:10:26.083 "dma_device_type": 1 00:10:26.083 }, 00:10:26.083 { 00:10:26.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.083 "dma_device_type": 2 00:10:26.083 }, 00:10:26.083 { 00:10:26.083 "dma_device_id": "system", 00:10:26.083 "dma_device_type": 1 00:10:26.083 }, 00:10:26.083 { 00:10:26.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.083 "dma_device_type": 2 00:10:26.083 }, 00:10:26.083 { 00:10:26.083 "dma_device_id": "system", 00:10:26.083 "dma_device_type": 1 00:10:26.083 }, 00:10:26.083 { 00:10:26.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.083 "dma_device_type": 2 00:10:26.083 } 00:10:26.083 ], 00:10:26.083 "driver_specific": { 00:10:26.083 "raid": { 00:10:26.083 "uuid": "8b459325-1ec8-49e0-958e-781da727847b", 00:10:26.083 "strip_size_kb": 64, 00:10:26.083 "state": "online", 00:10:26.083 "raid_level": "raid0", 00:10:26.083 "superblock": true, 00:10:26.083 "num_base_bdevs": 3, 00:10:26.083 "num_base_bdevs_discovered": 3, 00:10:26.083 "num_base_bdevs_operational": 3, 00:10:26.083 "base_bdevs_list": [ 00:10:26.083 { 00:10:26.083 "name": "pt1", 00:10:26.083 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:26.083 "is_configured": true, 00:10:26.083 "data_offset": 2048, 00:10:26.083 "data_size": 63488 00:10:26.083 }, 00:10:26.083 { 00:10:26.083 "name": "pt2", 00:10:26.083 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:26.083 "is_configured": true, 00:10:26.083 "data_offset": 2048, 00:10:26.083 "data_size": 63488 00:10:26.083 }, 00:10:26.083 { 00:10:26.083 "name": "pt3", 00:10:26.083 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:26.083 "is_configured": true, 00:10:26.083 "data_offset": 2048, 00:10:26.083 "data_size": 
63488 00:10:26.083 } 00:10:26.083 ] 00:10:26.083 } 00:10:26.083 } 00:10:26.083 }' 00:10:26.083 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:26.083 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:26.083 pt2 00:10:26.083 pt3' 00:10:26.083 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.083 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:26.083 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.342 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:26.342 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.342 21:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.342 21:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.342 21:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.342 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.342 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.342 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.342 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:26.342 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.342 21:37:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.342 21:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.342 21:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.342 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.342 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.342 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.342 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:26.342 21:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.342 21:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.342 21:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.342 21:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.342 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.342 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.342 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:26.342 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:26.342 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.342 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.342 [2024-12-10 21:37:27.048374] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:26.342 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:26.342 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8b459325-1ec8-49e0-958e-781da727847b 00:10:26.342 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 8b459325-1ec8-49e0-958e-781da727847b ']' 00:10:26.342 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:26.342 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.342 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.342 [2024-12-10 21:37:27.111960] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:26.342 [2024-12-10 21:37:27.112070] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:26.342 [2024-12-10 21:37:27.112176] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:26.342 [2024-12-10 21:37:27.112245] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:26.342 [2024-12-10 21:37:27.112256] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:26.342 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.342 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.342 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.342 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.342 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:26.600 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.600 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
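After `bdev_raid_delete raid_bdev1`, the trace calls `bdev_raid_get_bdevs all` again and pipes it through `jq -r '.[]'`; the result is empty, so `raid_bdev=` is assigned the empty string and the subsequent `'[' -n '' ']'` check confirms the array is gone. A small sketch of the equivalent filtering logic (the helper name `select_by_name` is hypothetical, standing in for jq's `select`):

```python
def select_by_name(bdevs, name):
    # Equivalent of jq's '.[] | select(.name == "...")' over the RPC result list.
    return [b for b in bdevs if b.get("name") == name]

# Before deletion the RPC dump contains raid_bdev1; afterwards the list is empty,
# so the filter produces no output either way you query it.
before = [{"name": "raid_bdev1", "state": "online"}]
after = []

print(len(select_by_name(before, "raid_bdev1")),
      len(select_by_name(after, "raid_bdev1")))
```

The same empty-result pattern is what makes the later `'[' -n '' ']'` guards in the trace fall through without error.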
00:10:26.600 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:26.600 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:26.600 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:26.600 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.600 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.600 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.600 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:26.600 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:26.600 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.600 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.600 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.600 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:26.600 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:26.600 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.600 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.600 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.600 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:26.600 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:26.600 21:37:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.600 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.600 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.600 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.601 [2024-12-10 21:37:27.243809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:26.601 [2024-12-10 21:37:27.245700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:26.601 [2024-12-10 21:37:27.245751] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:26.601 [2024-12-10 21:37:27.245809] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:26.601 [2024-12-10 21:37:27.245866] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:26.601 [2024-12-10 21:37:27.245886] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:26.601 [2024-12-10 21:37:27.245902] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:26.601 [2024-12-10 21:37:27.245914] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:26.601 request: 00:10:26.601 { 00:10:26.601 "name": "raid_bdev1", 00:10:26.601 "raid_level": "raid0", 00:10:26.601 "base_bdevs": [ 00:10:26.601 "malloc1", 00:10:26.601 "malloc2", 00:10:26.601 "malloc3" 00:10:26.601 ], 00:10:26.601 "strip_size_kb": 64, 00:10:26.601 "superblock": false, 00:10:26.601 "method": "bdev_raid_create", 00:10:26.601 "req_id": 1 00:10:26.601 } 00:10:26.601 Got JSON-RPC error response 00:10:26.601 response: 00:10:26.601 { 00:10:26.601 "code": -17, 00:10:26.601 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:26.601 } 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.601 [2024-12-10 21:37:27.299666] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:26.601 [2024-12-10 21:37:27.299823] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.601 [2024-12-10 21:37:27.299877] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:26.601 [2024-12-10 21:37:27.299911] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.601 [2024-12-10 21:37:27.302294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.601 [2024-12-10 21:37:27.302382] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:26.601 [2024-12-10 21:37:27.302555] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:26.601 [2024-12-10 21:37:27.302659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:10:26.601 pt1 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.601 "name": "raid_bdev1", 00:10:26.601 "uuid": "8b459325-1ec8-49e0-958e-781da727847b", 00:10:26.601 
"strip_size_kb": 64, 00:10:26.601 "state": "configuring", 00:10:26.601 "raid_level": "raid0", 00:10:26.601 "superblock": true, 00:10:26.601 "num_base_bdevs": 3, 00:10:26.601 "num_base_bdevs_discovered": 1, 00:10:26.601 "num_base_bdevs_operational": 3, 00:10:26.601 "base_bdevs_list": [ 00:10:26.601 { 00:10:26.601 "name": "pt1", 00:10:26.601 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:26.601 "is_configured": true, 00:10:26.601 "data_offset": 2048, 00:10:26.601 "data_size": 63488 00:10:26.601 }, 00:10:26.601 { 00:10:26.601 "name": null, 00:10:26.601 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:26.601 "is_configured": false, 00:10:26.601 "data_offset": 2048, 00:10:26.601 "data_size": 63488 00:10:26.601 }, 00:10:26.601 { 00:10:26.601 "name": null, 00:10:26.601 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:26.601 "is_configured": false, 00:10:26.601 "data_offset": 2048, 00:10:26.601 "data_size": 63488 00:10:26.601 } 00:10:26.601 ] 00:10:26.601 }' 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.601 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.176 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:27.176 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:27.176 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.176 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.177 [2024-12-10 21:37:27.706987] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:27.177 [2024-12-10 21:37:27.707139] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.177 [2024-12-10 21:37:27.707174] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:10:27.177 [2024-12-10 21:37:27.707184] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.177 [2024-12-10 21:37:27.707730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.177 [2024-12-10 21:37:27.707762] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:27.177 [2024-12-10 21:37:27.707868] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:27.177 [2024-12-10 21:37:27.707901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:27.177 pt2 00:10:27.177 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.177 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:27.177 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.177 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.177 [2024-12-10 21:37:27.714965] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:27.177 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.177 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:10:27.177 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:27.177 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.177 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:27.177 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.177 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.177 21:37:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.177 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.177 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.177 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.177 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:27.177 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.177 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.177 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.177 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.177 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.177 "name": "raid_bdev1", 00:10:27.177 "uuid": "8b459325-1ec8-49e0-958e-781da727847b", 00:10:27.177 "strip_size_kb": 64, 00:10:27.177 "state": "configuring", 00:10:27.177 "raid_level": "raid0", 00:10:27.177 "superblock": true, 00:10:27.177 "num_base_bdevs": 3, 00:10:27.177 "num_base_bdevs_discovered": 1, 00:10:27.177 "num_base_bdevs_operational": 3, 00:10:27.177 "base_bdevs_list": [ 00:10:27.177 { 00:10:27.177 "name": "pt1", 00:10:27.177 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:27.177 "is_configured": true, 00:10:27.177 "data_offset": 2048, 00:10:27.177 "data_size": 63488 00:10:27.177 }, 00:10:27.177 { 00:10:27.177 "name": null, 00:10:27.177 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:27.177 "is_configured": false, 00:10:27.177 "data_offset": 0, 00:10:27.177 "data_size": 63488 00:10:27.177 }, 00:10:27.177 { 00:10:27.177 "name": null, 00:10:27.177 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:27.177 
"is_configured": false, 00:10:27.177 "data_offset": 2048, 00:10:27.177 "data_size": 63488 00:10:27.177 } 00:10:27.177 ] 00:10:27.177 }' 00:10:27.177 21:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.177 21:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.444 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:27.444 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:27.444 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:27.444 21:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.444 21:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.444 [2024-12-10 21:37:28.174218] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:27.444 [2024-12-10 21:37:28.174401] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.444 [2024-12-10 21:37:28.174465] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:27.444 [2024-12-10 21:37:28.174535] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.444 [2024-12-10 21:37:28.175057] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.444 [2024-12-10 21:37:28.175127] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:27.444 [2024-12-10 21:37:28.175256] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:27.444 [2024-12-10 21:37:28.175315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:27.444 pt2 00:10:27.444 21:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:27.444 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:27.444 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:27.444 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:27.444 21:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.444 21:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.445 [2024-12-10 21:37:28.186198] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:27.445 [2024-12-10 21:37:28.186309] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.445 [2024-12-10 21:37:28.186346] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:27.445 [2024-12-10 21:37:28.186395] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.445 [2024-12-10 21:37:28.186899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.445 [2024-12-10 21:37:28.186976] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:27.445 [2024-12-10 21:37:28.187097] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:27.445 [2024-12-10 21:37:28.187154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:27.445 [2024-12-10 21:37:28.187309] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:27.445 [2024-12-10 21:37:28.187353] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:27.445 [2024-12-10 21:37:28.187694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:27.445 [2024-12-10 21:37:28.187872] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:27.445 [2024-12-10 21:37:28.187882] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:27.445 [2024-12-10 21:37:28.188061] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:27.445 pt3 00:10:27.445 21:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.445 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:27.445 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:27.445 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:27.445 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:27.445 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:27.445 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:27.445 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.445 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.445 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.445 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.445 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.445 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.445 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.445 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:10:27.445 21:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.445 21:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.445 21:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.703 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.703 "name": "raid_bdev1", 00:10:27.703 "uuid": "8b459325-1ec8-49e0-958e-781da727847b", 00:10:27.703 "strip_size_kb": 64, 00:10:27.703 "state": "online", 00:10:27.703 "raid_level": "raid0", 00:10:27.703 "superblock": true, 00:10:27.703 "num_base_bdevs": 3, 00:10:27.703 "num_base_bdevs_discovered": 3, 00:10:27.703 "num_base_bdevs_operational": 3, 00:10:27.703 "base_bdevs_list": [ 00:10:27.703 { 00:10:27.703 "name": "pt1", 00:10:27.703 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:27.703 "is_configured": true, 00:10:27.703 "data_offset": 2048, 00:10:27.703 "data_size": 63488 00:10:27.703 }, 00:10:27.703 { 00:10:27.703 "name": "pt2", 00:10:27.703 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:27.703 "is_configured": true, 00:10:27.703 "data_offset": 2048, 00:10:27.703 "data_size": 63488 00:10:27.703 }, 00:10:27.703 { 00:10:27.703 "name": "pt3", 00:10:27.703 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:27.703 "is_configured": true, 00:10:27.703 "data_offset": 2048, 00:10:27.703 "data_size": 63488 00:10:27.703 } 00:10:27.703 ] 00:10:27.703 }' 00:10:27.703 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.703 21:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.961 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:27.961 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:27.961 21:37:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:27.961 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:27.961 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:27.961 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:27.961 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:27.961 21:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.961 21:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.961 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:27.961 [2024-12-10 21:37:28.653744] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:27.961 21:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.961 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:27.961 "name": "raid_bdev1", 00:10:27.961 "aliases": [ 00:10:27.961 "8b459325-1ec8-49e0-958e-781da727847b" 00:10:27.961 ], 00:10:27.961 "product_name": "Raid Volume", 00:10:27.961 "block_size": 512, 00:10:27.961 "num_blocks": 190464, 00:10:27.961 "uuid": "8b459325-1ec8-49e0-958e-781da727847b", 00:10:27.961 "assigned_rate_limits": { 00:10:27.961 "rw_ios_per_sec": 0, 00:10:27.961 "rw_mbytes_per_sec": 0, 00:10:27.961 "r_mbytes_per_sec": 0, 00:10:27.961 "w_mbytes_per_sec": 0 00:10:27.961 }, 00:10:27.961 "claimed": false, 00:10:27.961 "zoned": false, 00:10:27.961 "supported_io_types": { 00:10:27.961 "read": true, 00:10:27.961 "write": true, 00:10:27.961 "unmap": true, 00:10:27.961 "flush": true, 00:10:27.961 "reset": true, 00:10:27.961 "nvme_admin": false, 00:10:27.961 "nvme_io": false, 00:10:27.961 "nvme_io_md": false, 00:10:27.961 
"write_zeroes": true, 00:10:27.961 "zcopy": false, 00:10:27.961 "get_zone_info": false, 00:10:27.961 "zone_management": false, 00:10:27.961 "zone_append": false, 00:10:27.961 "compare": false, 00:10:27.961 "compare_and_write": false, 00:10:27.961 "abort": false, 00:10:27.961 "seek_hole": false, 00:10:27.961 "seek_data": false, 00:10:27.961 "copy": false, 00:10:27.961 "nvme_iov_md": false 00:10:27.961 }, 00:10:27.961 "memory_domains": [ 00:10:27.961 { 00:10:27.961 "dma_device_id": "system", 00:10:27.961 "dma_device_type": 1 00:10:27.961 }, 00:10:27.961 { 00:10:27.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.961 "dma_device_type": 2 00:10:27.961 }, 00:10:27.961 { 00:10:27.961 "dma_device_id": "system", 00:10:27.961 "dma_device_type": 1 00:10:27.961 }, 00:10:27.961 { 00:10:27.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.961 "dma_device_type": 2 00:10:27.961 }, 00:10:27.961 { 00:10:27.961 "dma_device_id": "system", 00:10:27.961 "dma_device_type": 1 00:10:27.961 }, 00:10:27.961 { 00:10:27.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.961 "dma_device_type": 2 00:10:27.961 } 00:10:27.961 ], 00:10:27.961 "driver_specific": { 00:10:27.961 "raid": { 00:10:27.961 "uuid": "8b459325-1ec8-49e0-958e-781da727847b", 00:10:27.961 "strip_size_kb": 64, 00:10:27.961 "state": "online", 00:10:27.961 "raid_level": "raid0", 00:10:27.961 "superblock": true, 00:10:27.961 "num_base_bdevs": 3, 00:10:27.961 "num_base_bdevs_discovered": 3, 00:10:27.961 "num_base_bdevs_operational": 3, 00:10:27.961 "base_bdevs_list": [ 00:10:27.961 { 00:10:27.961 "name": "pt1", 00:10:27.961 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:27.961 "is_configured": true, 00:10:27.961 "data_offset": 2048, 00:10:27.961 "data_size": 63488 00:10:27.961 }, 00:10:27.961 { 00:10:27.961 "name": "pt2", 00:10:27.961 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:27.961 "is_configured": true, 00:10:27.961 "data_offset": 2048, 00:10:27.961 "data_size": 63488 00:10:27.961 }, 00:10:27.961 
{ 00:10:27.961 "name": "pt3", 00:10:27.961 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:27.961 "is_configured": true, 00:10:27.961 "data_offset": 2048, 00:10:27.961 "data_size": 63488 00:10:27.961 } 00:10:27.961 ] 00:10:27.961 } 00:10:27.961 } 00:10:27.961 }' 00:10:27.961 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:27.961 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:27.961 pt2 00:10:27.961 pt3' 00:10:27.961 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:28.220 [2024-12-10 
21:37:28.905273] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8b459325-1ec8-49e0-958e-781da727847b '!=' 8b459325-1ec8-49e0-958e-781da727847b ']' 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65176 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65176 ']' 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65176 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65176 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65176' 00:10:28.220 killing process with pid 65176 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65176 00:10:28.220 [2024-12-10 21:37:28.971799] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:28.220 21:37:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@978 -- # wait 65176 00:10:28.220 [2024-12-10 21:37:28.971992] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:28.220 [2024-12-10 21:37:28.972066] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:28.220 [2024-12-10 21:37:28.972134] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:28.787 [2024-12-10 21:37:29.296673] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:29.721 21:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:29.721 00:10:29.721 real 0m5.482s 00:10:29.721 user 0m7.900s 00:10:29.721 sys 0m0.889s 00:10:29.721 21:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:29.721 21:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.721 ************************************ 00:10:29.721 END TEST raid_superblock_test 00:10:29.721 ************************************ 00:10:29.721 21:37:30 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:10:29.721 21:37:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:29.721 21:37:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:29.721 21:37:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:29.980 ************************************ 00:10:29.980 START TEST raid_read_error_test 00:10:29.980 ************************************ 00:10:29.980 21:37:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:10:29.980 21:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:29.980 21:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:29.980 21:37:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:29.980 21:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:29.980 21:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.981 21:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:29.981 21:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:29.981 21:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.981 21:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:29.981 21:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:29.981 21:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.981 21:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:29.981 21:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:29.981 21:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.981 21:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:29.981 21:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:29.981 21:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:29.981 21:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:29.981 21:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:29.981 21:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:29.981 21:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:29.981 21:37:30 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:29.981 21:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:29.981 21:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:29.981 21:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:29.981 21:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.5UY8ggxKB3 00:10:29.981 21:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65430 00:10:29.981 21:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65430 00:10:29.981 21:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:29.981 21:37:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65430 ']' 00:10:29.981 21:37:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.981 21:37:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:29.981 21:37:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.981 21:37:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:29.981 21:37:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.981 [2024-12-10 21:37:30.614064] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:10:29.981 [2024-12-10 21:37:30.614302] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65430 ] 00:10:30.239 [2024-12-10 21:37:30.791507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.239 [2024-12-10 21:37:30.909585] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.497 [2024-12-10 21:37:31.112695] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:30.497 [2024-12-10 21:37:31.112746] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:31.062 21:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:31.062 21:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:31.062 21:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:31.062 21:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.063 BaseBdev1_malloc 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.063 true 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.063 [2024-12-10 21:37:31.601351] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:31.063 [2024-12-10 21:37:31.601502] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.063 [2024-12-10 21:37:31.601554] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:31.063 [2024-12-10 21:37:31.601569] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.063 [2024-12-10 21:37:31.604072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.063 [2024-12-10 21:37:31.604119] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:31.063 BaseBdev1 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.063 BaseBdev2_malloc 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.063 true 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.063 [2024-12-10 21:37:31.662639] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:31.063 [2024-12-10 21:37:31.662771] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.063 [2024-12-10 21:37:31.662799] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:31.063 [2024-12-10 21:37:31.662813] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.063 [2024-12-10 21:37:31.665412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.063 [2024-12-10 21:37:31.665461] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:31.063 BaseBdev2 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.063 BaseBdev3_malloc 00:10:31.063 21:37:31 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.063 true 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.063 [2024-12-10 21:37:31.747315] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:31.063 [2024-12-10 21:37:31.747446] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.063 [2024-12-10 21:37:31.747474] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:31.063 [2024-12-10 21:37:31.747487] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.063 [2024-12-10 21:37:31.749948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.063 [2024-12-10 21:37:31.749998] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:31.063 BaseBdev3 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.063 [2024-12-10 21:37:31.755382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:31.063 [2024-12-10 21:37:31.757519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:31.063 [2024-12-10 21:37:31.757677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:31.063 [2024-12-10 21:37:31.757967] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:31.063 [2024-12-10 21:37:31.757988] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:31.063 [2024-12-10 21:37:31.758319] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:31.063 [2024-12-10 21:37:31.758526] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:31.063 [2024-12-10 21:37:31.758548] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:31.063 [2024-12-10 21:37:31.758725] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.063 21:37:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.063 "name": "raid_bdev1", 00:10:31.063 "uuid": "0562168a-0021-4e31-9295-8ac2db9baaab", 00:10:31.063 "strip_size_kb": 64, 00:10:31.063 "state": "online", 00:10:31.063 "raid_level": "raid0", 00:10:31.063 "superblock": true, 00:10:31.063 "num_base_bdevs": 3, 00:10:31.063 "num_base_bdevs_discovered": 3, 00:10:31.063 "num_base_bdevs_operational": 3, 00:10:31.063 "base_bdevs_list": [ 00:10:31.063 { 00:10:31.063 "name": "BaseBdev1", 00:10:31.063 "uuid": "59458bb4-b2a1-553b-bf3f-f59863abc8de", 00:10:31.063 "is_configured": true, 00:10:31.063 "data_offset": 2048, 00:10:31.063 "data_size": 63488 00:10:31.063 }, 00:10:31.063 { 00:10:31.063 "name": "BaseBdev2", 00:10:31.063 "uuid": "e498b3ba-50e7-54bb-a399-9a46b8da43e2", 00:10:31.063 "is_configured": true, 00:10:31.063 "data_offset": 2048, 00:10:31.063 "data_size": 63488 
00:10:31.063 }, 00:10:31.063 { 00:10:31.063 "name": "BaseBdev3", 00:10:31.063 "uuid": "5a7d0cff-056c-58de-9f05-819f1ad4eb6e", 00:10:31.063 "is_configured": true, 00:10:31.063 "data_offset": 2048, 00:10:31.063 "data_size": 63488 00:10:31.063 } 00:10:31.063 ] 00:10:31.063 }' 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.063 21:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.629 21:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:31.629 21:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:31.629 [2024-12-10 21:37:32.319926] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:32.596 21:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:32.596 21:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.596 21:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.596 21:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.596 21:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:32.596 21:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:32.596 21:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:32.596 21:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:32.596 21:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:32.596 21:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:10:32.596 21:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.596 21:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.596 21:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:32.596 21:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.596 21:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.596 21:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.596 21:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.596 21:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:32.596 21:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.596 21:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.596 21:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.596 21:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.596 21:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.596 "name": "raid_bdev1", 00:10:32.596 "uuid": "0562168a-0021-4e31-9295-8ac2db9baaab", 00:10:32.596 "strip_size_kb": 64, 00:10:32.596 "state": "online", 00:10:32.596 "raid_level": "raid0", 00:10:32.596 "superblock": true, 00:10:32.596 "num_base_bdevs": 3, 00:10:32.596 "num_base_bdevs_discovered": 3, 00:10:32.596 "num_base_bdevs_operational": 3, 00:10:32.596 "base_bdevs_list": [ 00:10:32.596 { 00:10:32.596 "name": "BaseBdev1", 00:10:32.596 "uuid": "59458bb4-b2a1-553b-bf3f-f59863abc8de", 00:10:32.596 "is_configured": true, 00:10:32.596 "data_offset": 2048, 00:10:32.596 "data_size": 63488 
00:10:32.596 }, 00:10:32.596 { 00:10:32.596 "name": "BaseBdev2", 00:10:32.596 "uuid": "e498b3ba-50e7-54bb-a399-9a46b8da43e2", 00:10:32.596 "is_configured": true, 00:10:32.596 "data_offset": 2048, 00:10:32.596 "data_size": 63488 00:10:32.596 }, 00:10:32.596 { 00:10:32.596 "name": "BaseBdev3", 00:10:32.596 "uuid": "5a7d0cff-056c-58de-9f05-819f1ad4eb6e", 00:10:32.596 "is_configured": true, 00:10:32.596 "data_offset": 2048, 00:10:32.596 "data_size": 63488 00:10:32.596 } 00:10:32.596 ] 00:10:32.596 }' 00:10:32.596 21:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.596 21:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.164 21:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:33.164 21:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.164 21:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.164 [2024-12-10 21:37:33.644343] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:33.164 [2024-12-10 21:37:33.644459] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:33.164 [2024-12-10 21:37:33.647731] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:33.164 [2024-12-10 21:37:33.647833] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.164 [2024-12-10 21:37:33.647889] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:33.164 [2024-12-10 21:37:33.647900] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:33.164 { 00:10:33.164 "results": [ 00:10:33.164 { 00:10:33.164 "job": "raid_bdev1", 00:10:33.164 "core_mask": "0x1", 00:10:33.164 "workload": "randrw", 00:10:33.164 "percentage": 50, 
00:10:33.164 "status": "finished", 00:10:33.164 "queue_depth": 1, 00:10:33.164 "io_size": 131072, 00:10:33.164 "runtime": 1.325243, 00:10:33.164 "iops": 13841.235154609381, 00:10:33.164 "mibps": 1730.1543943261727, 00:10:33.164 "io_failed": 1, 00:10:33.164 "io_timeout": 0, 00:10:33.164 "avg_latency_us": 99.90155723609162, 00:10:33.164 "min_latency_us": 24.370305676855896, 00:10:33.164 "max_latency_us": 1738.564192139738 00:10:33.164 } 00:10:33.164 ], 00:10:33.164 "core_count": 1 00:10:33.164 } 00:10:33.164 21:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.164 21:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65430 00:10:33.164 21:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65430 ']' 00:10:33.164 21:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65430 00:10:33.164 21:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:33.164 21:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:33.164 21:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65430 00:10:33.164 21:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:33.164 21:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:33.164 21:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65430' 00:10:33.164 killing process with pid 65430 00:10:33.164 21:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65430 00:10:33.164 21:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65430 00:10:33.164 [2024-12-10 21:37:33.691488] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:33.422 [2024-12-10 
21:37:33.956552] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:34.797 21:37:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:34.797 21:37:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:34.797 21:37:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.5UY8ggxKB3 00:10:34.797 21:37:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:10:34.797 21:37:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:34.797 21:37:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:34.797 21:37:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:34.797 ************************************ 00:10:34.797 END TEST raid_read_error_test 00:10:34.797 ************************************ 00:10:34.797 21:37:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:10:34.797 00:10:34.797 real 0m4.735s 00:10:34.797 user 0m5.668s 00:10:34.797 sys 0m0.529s 00:10:34.797 21:37:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.797 21:37:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.797 21:37:35 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:10:34.797 21:37:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:34.797 21:37:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.797 21:37:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:34.797 ************************************ 00:10:34.797 START TEST raid_write_error_test 00:10:34.797 ************************************ 00:10:34.797 21:37:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:10:34.797 21:37:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:34.797 21:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:34.797 21:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:34.797 21:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:34.797 21:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:34.797 21:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:34.797 21:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:34.797 21:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:34.797 21:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:34.797 21:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:34.797 21:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:34.797 21:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:34.797 21:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:34.797 21:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:34.797 21:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:34.797 21:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:34.797 21:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:34.797 21:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:34.797 21:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:34.797 21:37:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:34.797 21:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:34.797 21:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:34.797 21:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:34.797 21:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:34.797 21:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:34.797 21:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.thOMJbm1xP 00:10:34.797 21:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65576 00:10:34.797 21:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65576 00:10:34.797 21:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:34.797 21:37:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65576 ']' 00:10:34.797 21:37:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.797 21:37:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:34.797 21:37:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:34.797 21:37:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:34.797 21:37:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.797 [2024-12-10 21:37:35.402972] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:10:34.797 [2024-12-10 21:37:35.403100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65576 ] 00:10:35.055 [2024-12-10 21:37:35.581520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.055 [2024-12-10 21:37:35.715610] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.312 [2024-12-10 21:37:35.933369] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.312 [2024-12-10 21:37:35.933435] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.570 21:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:35.570 21:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:35.570 21:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:35.570 21:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:35.570 21:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.570 21:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.570 BaseBdev1_malloc 00:10:35.570 21:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.570 21:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:35.570 21:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.570 21:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.570 true 00:10:35.570 21:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.570 21:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:35.570 21:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.570 21:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.570 [2024-12-10 21:37:36.321193] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:35.570 [2024-12-10 21:37:36.321255] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.570 [2024-12-10 21:37:36.321277] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:35.570 [2024-12-10 21:37:36.321288] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.570 [2024-12-10 21:37:36.323663] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.570 [2024-12-10 21:37:36.323787] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:35.570 BaseBdev1 00:10:35.570 21:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.570 21:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:35.570 21:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:35.570 21:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.570 21:37:36 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:35.828 BaseBdev2_malloc 00:10:35.828 21:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.828 21:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:35.828 21:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.828 21:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.828 true 00:10:35.828 21:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.828 21:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:35.828 21:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.828 21:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.828 [2024-12-10 21:37:36.382177] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:35.828 [2024-12-10 21:37:36.382237] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.828 [2024-12-10 21:37:36.382254] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:35.828 [2024-12-10 21:37:36.382264] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.828 [2024-12-10 21:37:36.384555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.828 [2024-12-10 21:37:36.384598] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:35.828 BaseBdev2 00:10:35.828 21:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.828 21:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:35.828 21:37:36 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:35.829 21:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.829 21:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.829 BaseBdev3_malloc 00:10:35.829 21:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.829 21:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:35.829 21:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.829 21:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.829 true 00:10:35.829 21:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.829 21:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:35.829 21:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.829 21:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.829 [2024-12-10 21:37:36.456998] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:35.829 [2024-12-10 21:37:36.457060] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.829 [2024-12-10 21:37:36.457097] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:35.829 [2024-12-10 21:37:36.457109] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.829 [2024-12-10 21:37:36.459573] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.829 [2024-12-10 21:37:36.459696] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:35.829 BaseBdev3 00:10:35.829 21:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.829 21:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:35.829 21:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.829 21:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.829 [2024-12-10 21:37:36.465077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:35.829 [2024-12-10 21:37:36.467160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:35.829 [2024-12-10 21:37:36.467314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:35.829 [2024-12-10 21:37:36.467609] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:35.829 [2024-12-10 21:37:36.467628] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:35.829 [2024-12-10 21:37:36.467932] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:35.829 [2024-12-10 21:37:36.468123] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:35.829 [2024-12-10 21:37:36.468138] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:35.829 [2024-12-10 21:37:36.468319] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:35.829 21:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.829 21:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:35.829 21:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:10:35.829 21:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:35.829 21:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:35.829 21:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.829 21:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:35.829 21:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.829 21:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.829 21:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.829 21:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.829 21:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.829 21:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:35.829 21:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.829 21:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.829 21:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.829 21:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.829 "name": "raid_bdev1", 00:10:35.829 "uuid": "bfdd8379-73aa-4773-9154-63f0fbb2292b", 00:10:35.829 "strip_size_kb": 64, 00:10:35.829 "state": "online", 00:10:35.829 "raid_level": "raid0", 00:10:35.829 "superblock": true, 00:10:35.829 "num_base_bdevs": 3, 00:10:35.829 "num_base_bdevs_discovered": 3, 00:10:35.829 "num_base_bdevs_operational": 3, 00:10:35.829 "base_bdevs_list": [ 00:10:35.829 { 00:10:35.829 "name": "BaseBdev1", 
00:10:35.829 "uuid": "4bd326ca-9e12-5f45-91ba-a3dbd151c014", 00:10:35.829 "is_configured": true, 00:10:35.829 "data_offset": 2048, 00:10:35.829 "data_size": 63488 00:10:35.829 }, 00:10:35.829 { 00:10:35.829 "name": "BaseBdev2", 00:10:35.829 "uuid": "0f845ec6-d53e-5229-b4b1-0e058769ec9a", 00:10:35.829 "is_configured": true, 00:10:35.829 "data_offset": 2048, 00:10:35.829 "data_size": 63488 00:10:35.829 }, 00:10:35.829 { 00:10:35.829 "name": "BaseBdev3", 00:10:35.829 "uuid": "589ca764-b83d-5d24-b4b2-7b7371724f91", 00:10:35.829 "is_configured": true, 00:10:35.829 "data_offset": 2048, 00:10:35.829 "data_size": 63488 00:10:35.829 } 00:10:35.829 ] 00:10:35.829 }' 00:10:35.829 21:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.829 21:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.395 21:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:36.395 21:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:36.395 [2024-12-10 21:37:37.033525] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:37.330 21:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:37.330 21:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.330 21:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.330 21:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.330 21:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:37.330 21:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:37.330 21:37:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:37.330 21:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:37.330 21:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:37.330 21:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:37.330 21:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:37.330 21:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.330 21:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:37.330 21:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.330 21:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.330 21:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.330 21:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.330 21:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:37.330 21:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.330 21:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.330 21:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.330 21:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.330 21:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.330 "name": "raid_bdev1", 00:10:37.330 "uuid": "bfdd8379-73aa-4773-9154-63f0fbb2292b", 00:10:37.330 "strip_size_kb": 64, 00:10:37.330 "state": "online", 00:10:37.330 
"raid_level": "raid0", 00:10:37.330 "superblock": true, 00:10:37.330 "num_base_bdevs": 3, 00:10:37.330 "num_base_bdevs_discovered": 3, 00:10:37.330 "num_base_bdevs_operational": 3, 00:10:37.330 "base_bdevs_list": [ 00:10:37.330 { 00:10:37.330 "name": "BaseBdev1", 00:10:37.330 "uuid": "4bd326ca-9e12-5f45-91ba-a3dbd151c014", 00:10:37.330 "is_configured": true, 00:10:37.330 "data_offset": 2048, 00:10:37.330 "data_size": 63488 00:10:37.330 }, 00:10:37.330 { 00:10:37.330 "name": "BaseBdev2", 00:10:37.330 "uuid": "0f845ec6-d53e-5229-b4b1-0e058769ec9a", 00:10:37.330 "is_configured": true, 00:10:37.330 "data_offset": 2048, 00:10:37.330 "data_size": 63488 00:10:37.330 }, 00:10:37.330 { 00:10:37.330 "name": "BaseBdev3", 00:10:37.330 "uuid": "589ca764-b83d-5d24-b4b2-7b7371724f91", 00:10:37.330 "is_configured": true, 00:10:37.330 "data_offset": 2048, 00:10:37.330 "data_size": 63488 00:10:37.330 } 00:10:37.330 ] 00:10:37.330 }' 00:10:37.330 21:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.330 21:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.897 21:37:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:37.897 21:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.897 21:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.897 [2024-12-10 21:37:38.398226] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:37.897 [2024-12-10 21:37:38.398260] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:37.897 [2024-12-10 21:37:38.401556] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:37.897 [2024-12-10 21:37:38.401607] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:37.897 [2024-12-10 21:37:38.401647] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:37.897 [2024-12-10 21:37:38.401657] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:37.897 { 00:10:37.897 "results": [ 00:10:37.897 { 00:10:37.897 "job": "raid_bdev1", 00:10:37.897 "core_mask": "0x1", 00:10:37.897 "workload": "randrw", 00:10:37.897 "percentage": 50, 00:10:37.897 "status": "finished", 00:10:37.897 "queue_depth": 1, 00:10:37.897 "io_size": 131072, 00:10:37.897 "runtime": 1.365437, 00:10:37.897 "iops": 13472.609867756622, 00:10:37.897 "mibps": 1684.0762334695778, 00:10:37.897 "io_failed": 1, 00:10:37.897 "io_timeout": 0, 00:10:37.897 "avg_latency_us": 102.5982810468671, 00:10:37.897 "min_latency_us": 20.90480349344978, 00:10:37.897 "max_latency_us": 1631.2454148471616 00:10:37.897 } 00:10:37.897 ], 00:10:37.897 "core_count": 1 00:10:37.897 } 00:10:37.897 21:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.897 21:37:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65576 00:10:37.897 21:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65576 ']' 00:10:37.897 21:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65576 00:10:37.897 21:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:37.897 21:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:37.897 21:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65576 00:10:37.897 killing process with pid 65576 00:10:37.897 21:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:37.897 21:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:37.897 21:37:38 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65576' 00:10:37.897 21:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65576 00:10:37.897 21:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65576 00:10:37.897 [2024-12-10 21:37:38.441719] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:38.155 [2024-12-10 21:37:38.692346] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:39.529 21:37:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.thOMJbm1xP 00:10:39.529 21:37:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:39.529 21:37:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:39.529 21:37:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:39.529 21:37:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:39.529 ************************************ 00:10:39.529 END TEST raid_write_error_test 00:10:39.529 ************************************ 00:10:39.529 21:37:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:39.529 21:37:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:39.529 21:37:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:39.529 00:10:39.529 real 0m4.768s 00:10:39.529 user 0m5.632s 00:10:39.529 sys 0m0.572s 00:10:39.529 21:37:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:39.529 21:37:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.529 21:37:40 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:39.529 21:37:40 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:10:39.529 21:37:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:39.529 21:37:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:39.529 21:37:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:39.529 ************************************ 00:10:39.529 START TEST raid_state_function_test 00:10:39.529 ************************************ 00:10:39.529 21:37:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:10:39.529 21:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:39.529 21:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:39.529 21:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:39.529 21:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:39.529 21:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:39.529 21:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:39.529 21:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:39.529 21:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:39.529 21:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:39.529 21:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:39.529 21:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:39.529 21:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:39.529 21:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:39.529 21:37:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:39.529 21:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:39.529 21:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:39.529 21:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:39.529 21:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:39.529 21:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:39.529 21:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:39.529 21:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:39.529 21:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:39.529 21:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:39.529 21:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:39.529 21:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:39.529 21:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:39.529 21:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65719 00:10:39.529 21:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:39.529 21:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65719' 00:10:39.529 Process raid pid: 65719 00:10:39.530 21:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65719 00:10:39.530 21:37:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65719 ']' 00:10:39.530 21:37:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.530 21:37:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:39.530 21:37:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.530 21:37:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:39.530 21:37:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.530 [2024-12-10 21:37:40.229179] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:10:39.530 [2024-12-10 21:37:40.229394] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.787 [2024-12-10 21:37:40.409663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.787 [2024-12-10 21:37:40.541409] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.046 [2024-12-10 21:37:40.762889] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:40.046 [2024-12-10 21:37:40.763042] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:40.612 21:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:40.612 21:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:40.612 21:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:40.612 21:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.612 21:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.612 [2024-12-10 21:37:41.133186] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:40.612 [2024-12-10 21:37:41.133289] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:40.612 [2024-12-10 21:37:41.133331] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:40.612 [2024-12-10 21:37:41.133364] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:40.612 [2024-12-10 21:37:41.133407] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:40.612 [2024-12-10 21:37:41.133444] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:40.612 21:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.612 21:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:40.612 21:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.612 21:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.612 21:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.612 21:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.612 21:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:40.612 21:37:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.612 21:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.612 21:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.612 21:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.612 21:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.612 21:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.612 21:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.612 21:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.612 21:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.612 21:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.612 "name": "Existed_Raid", 00:10:40.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.612 "strip_size_kb": 64, 00:10:40.612 "state": "configuring", 00:10:40.612 "raid_level": "concat", 00:10:40.612 "superblock": false, 00:10:40.612 "num_base_bdevs": 3, 00:10:40.612 "num_base_bdevs_discovered": 0, 00:10:40.612 "num_base_bdevs_operational": 3, 00:10:40.612 "base_bdevs_list": [ 00:10:40.612 { 00:10:40.612 "name": "BaseBdev1", 00:10:40.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.612 "is_configured": false, 00:10:40.612 "data_offset": 0, 00:10:40.612 "data_size": 0 00:10:40.612 }, 00:10:40.612 { 00:10:40.612 "name": "BaseBdev2", 00:10:40.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.612 "is_configured": false, 00:10:40.612 "data_offset": 0, 00:10:40.612 "data_size": 0 00:10:40.612 }, 00:10:40.612 { 00:10:40.612 "name": "BaseBdev3", 00:10:40.612 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:40.612 "is_configured": false, 00:10:40.612 "data_offset": 0, 00:10:40.612 "data_size": 0 00:10:40.612 } 00:10:40.612 ] 00:10:40.612 }' 00:10:40.612 21:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.612 21:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.870 21:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:40.871 21:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.871 21:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.871 [2024-12-10 21:37:41.620312] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:40.871 [2024-12-10 21:37:41.620413] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:40.871 21:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.871 21:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:40.871 21:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.871 21:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.871 [2024-12-10 21:37:41.632298] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:40.871 [2024-12-10 21:37:41.632392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:40.871 [2024-12-10 21:37:41.632444] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:40.871 [2024-12-10 21:37:41.632476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:10:40.871 [2024-12-10 21:37:41.632543] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:40.871 [2024-12-10 21:37:41.632570] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:40.871 21:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.871 21:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:40.871 21:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.871 21:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.129 [2024-12-10 21:37:41.682977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:41.129 BaseBdev1 00:10:41.129 21:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.129 21:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:41.129 21:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:41.129 21:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.129 21:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:41.129 21:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.129 21:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.129 21:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.129 21:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.129 21:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:41.129 21:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.129 21:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:41.129 21:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.129 21:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.129 [ 00:10:41.129 { 00:10:41.129 "name": "BaseBdev1", 00:10:41.129 "aliases": [ 00:10:41.129 "68f4a771-ccba-43b2-895e-f74fd0c3e4e5" 00:10:41.129 ], 00:10:41.129 "product_name": "Malloc disk", 00:10:41.129 "block_size": 512, 00:10:41.129 "num_blocks": 65536, 00:10:41.129 "uuid": "68f4a771-ccba-43b2-895e-f74fd0c3e4e5", 00:10:41.129 "assigned_rate_limits": { 00:10:41.129 "rw_ios_per_sec": 0, 00:10:41.129 "rw_mbytes_per_sec": 0, 00:10:41.129 "r_mbytes_per_sec": 0, 00:10:41.129 "w_mbytes_per_sec": 0 00:10:41.129 }, 00:10:41.129 "claimed": true, 00:10:41.129 "claim_type": "exclusive_write", 00:10:41.129 "zoned": false, 00:10:41.129 "supported_io_types": { 00:10:41.129 "read": true, 00:10:41.129 "write": true, 00:10:41.129 "unmap": true, 00:10:41.129 "flush": true, 00:10:41.129 "reset": true, 00:10:41.129 "nvme_admin": false, 00:10:41.129 "nvme_io": false, 00:10:41.129 "nvme_io_md": false, 00:10:41.129 "write_zeroes": true, 00:10:41.129 "zcopy": true, 00:10:41.129 "get_zone_info": false, 00:10:41.129 "zone_management": false, 00:10:41.129 "zone_append": false, 00:10:41.129 "compare": false, 00:10:41.129 "compare_and_write": false, 00:10:41.129 "abort": true, 00:10:41.129 "seek_hole": false, 00:10:41.129 "seek_data": false, 00:10:41.129 "copy": true, 00:10:41.129 "nvme_iov_md": false 00:10:41.129 }, 00:10:41.129 "memory_domains": [ 00:10:41.129 { 00:10:41.129 "dma_device_id": "system", 00:10:41.129 "dma_device_type": 1 00:10:41.129 }, 00:10:41.129 { 00:10:41.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:41.129 "dma_device_type": 2 00:10:41.129 } 00:10:41.129 ], 00:10:41.129 "driver_specific": {} 00:10:41.129 } 00:10:41.129 ] 00:10:41.129 21:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.129 21:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:41.129 21:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:41.129 21:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.129 21:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.129 21:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:41.129 21:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.129 21:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:41.129 21:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.129 21:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.129 21:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.129 21:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.129 21:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.129 21:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.129 21:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.129 21:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.129 21:37:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.129 21:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.129 "name": "Existed_Raid", 00:10:41.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.129 "strip_size_kb": 64, 00:10:41.129 "state": "configuring", 00:10:41.129 "raid_level": "concat", 00:10:41.129 "superblock": false, 00:10:41.129 "num_base_bdevs": 3, 00:10:41.129 "num_base_bdevs_discovered": 1, 00:10:41.129 "num_base_bdevs_operational": 3, 00:10:41.129 "base_bdevs_list": [ 00:10:41.129 { 00:10:41.129 "name": "BaseBdev1", 00:10:41.129 "uuid": "68f4a771-ccba-43b2-895e-f74fd0c3e4e5", 00:10:41.129 "is_configured": true, 00:10:41.129 "data_offset": 0, 00:10:41.129 "data_size": 65536 00:10:41.129 }, 00:10:41.129 { 00:10:41.129 "name": "BaseBdev2", 00:10:41.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.129 "is_configured": false, 00:10:41.129 "data_offset": 0, 00:10:41.129 "data_size": 0 00:10:41.129 }, 00:10:41.129 { 00:10:41.129 "name": "BaseBdev3", 00:10:41.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.129 "is_configured": false, 00:10:41.129 "data_offset": 0, 00:10:41.129 "data_size": 0 00:10:41.129 } 00:10:41.129 ] 00:10:41.129 }' 00:10:41.129 21:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.129 21:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.695 21:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:41.695 21:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.695 21:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.695 [2024-12-10 21:37:42.174200] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:41.695 [2024-12-10 21:37:42.174309] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:41.695 21:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.695 21:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:41.695 21:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.695 21:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.695 [2024-12-10 21:37:42.186229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:41.695 [2024-12-10 21:37:42.188191] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:41.695 [2024-12-10 21:37:42.188234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:41.695 [2024-12-10 21:37:42.188245] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:41.695 [2024-12-10 21:37:42.188255] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:41.695 21:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.695 21:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:41.695 21:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:41.695 21:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:41.695 21:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.695 21:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.695 21:37:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:41.695 21:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.695 21:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:41.695 21:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.695 21:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.695 21:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.695 21:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.695 21:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.695 21:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.695 21:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.695 21:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.695 21:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.695 21:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.695 "name": "Existed_Raid", 00:10:41.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.695 "strip_size_kb": 64, 00:10:41.695 "state": "configuring", 00:10:41.695 "raid_level": "concat", 00:10:41.695 "superblock": false, 00:10:41.695 "num_base_bdevs": 3, 00:10:41.695 "num_base_bdevs_discovered": 1, 00:10:41.695 "num_base_bdevs_operational": 3, 00:10:41.695 "base_bdevs_list": [ 00:10:41.695 { 00:10:41.695 "name": "BaseBdev1", 00:10:41.695 "uuid": "68f4a771-ccba-43b2-895e-f74fd0c3e4e5", 00:10:41.695 "is_configured": true, 00:10:41.695 "data_offset": 
0, 00:10:41.695 "data_size": 65536 00:10:41.695 }, 00:10:41.695 { 00:10:41.695 "name": "BaseBdev2", 00:10:41.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.695 "is_configured": false, 00:10:41.695 "data_offset": 0, 00:10:41.695 "data_size": 0 00:10:41.695 }, 00:10:41.695 { 00:10:41.695 "name": "BaseBdev3", 00:10:41.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.695 "is_configured": false, 00:10:41.695 "data_offset": 0, 00:10:41.695 "data_size": 0 00:10:41.695 } 00:10:41.695 ] 00:10:41.695 }' 00:10:41.695 21:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.695 21:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.953 21:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:41.953 21:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.953 21:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.953 [2024-12-10 21:37:42.716630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:41.953 BaseBdev2 00:10:41.953 21:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.953 21:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:41.953 21:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:41.953 21:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.953 21:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:41.953 21:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.953 21:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:41.953 21:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.953 21:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.953 21:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.953 21:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.953 21:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:41.953 21:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.953 21:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.211 [ 00:10:42.211 { 00:10:42.211 "name": "BaseBdev2", 00:10:42.211 "aliases": [ 00:10:42.211 "b062766b-4245-46f0-87e4-25a753b03655" 00:10:42.211 ], 00:10:42.211 "product_name": "Malloc disk", 00:10:42.211 "block_size": 512, 00:10:42.211 "num_blocks": 65536, 00:10:42.211 "uuid": "b062766b-4245-46f0-87e4-25a753b03655", 00:10:42.211 "assigned_rate_limits": { 00:10:42.211 "rw_ios_per_sec": 0, 00:10:42.211 "rw_mbytes_per_sec": 0, 00:10:42.211 "r_mbytes_per_sec": 0, 00:10:42.211 "w_mbytes_per_sec": 0 00:10:42.211 }, 00:10:42.211 "claimed": true, 00:10:42.211 "claim_type": "exclusive_write", 00:10:42.211 "zoned": false, 00:10:42.211 "supported_io_types": { 00:10:42.211 "read": true, 00:10:42.211 "write": true, 00:10:42.211 "unmap": true, 00:10:42.211 "flush": true, 00:10:42.211 "reset": true, 00:10:42.211 "nvme_admin": false, 00:10:42.211 "nvme_io": false, 00:10:42.211 "nvme_io_md": false, 00:10:42.211 "write_zeroes": true, 00:10:42.211 "zcopy": true, 00:10:42.211 "get_zone_info": false, 00:10:42.211 "zone_management": false, 00:10:42.211 "zone_append": false, 00:10:42.211 "compare": false, 00:10:42.211 "compare_and_write": false, 00:10:42.211 "abort": true, 00:10:42.211 "seek_hole": 
false, 00:10:42.211 "seek_data": false, 00:10:42.211 "copy": true, 00:10:42.211 "nvme_iov_md": false 00:10:42.211 }, 00:10:42.211 "memory_domains": [ 00:10:42.211 { 00:10:42.211 "dma_device_id": "system", 00:10:42.211 "dma_device_type": 1 00:10:42.211 }, 00:10:42.211 { 00:10:42.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.211 "dma_device_type": 2 00:10:42.211 } 00:10:42.211 ], 00:10:42.211 "driver_specific": {} 00:10:42.211 } 00:10:42.211 ] 00:10:42.211 21:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.211 21:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:42.211 21:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:42.211 21:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:42.211 21:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:42.211 21:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.211 21:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.211 21:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.211 21:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.211 21:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:42.211 21:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.211 21:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.211 21:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.211 21:37:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.211 21:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.211 21:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.211 21:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.211 21:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.211 21:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.211 21:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.211 "name": "Existed_Raid", 00:10:42.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.211 "strip_size_kb": 64, 00:10:42.211 "state": "configuring", 00:10:42.211 "raid_level": "concat", 00:10:42.211 "superblock": false, 00:10:42.211 "num_base_bdevs": 3, 00:10:42.211 "num_base_bdevs_discovered": 2, 00:10:42.211 "num_base_bdevs_operational": 3, 00:10:42.211 "base_bdevs_list": [ 00:10:42.211 { 00:10:42.211 "name": "BaseBdev1", 00:10:42.211 "uuid": "68f4a771-ccba-43b2-895e-f74fd0c3e4e5", 00:10:42.211 "is_configured": true, 00:10:42.211 "data_offset": 0, 00:10:42.211 "data_size": 65536 00:10:42.211 }, 00:10:42.211 { 00:10:42.211 "name": "BaseBdev2", 00:10:42.211 "uuid": "b062766b-4245-46f0-87e4-25a753b03655", 00:10:42.211 "is_configured": true, 00:10:42.211 "data_offset": 0, 00:10:42.211 "data_size": 65536 00:10:42.211 }, 00:10:42.211 { 00:10:42.211 "name": "BaseBdev3", 00:10:42.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.211 "is_configured": false, 00:10:42.211 "data_offset": 0, 00:10:42.211 "data_size": 0 00:10:42.211 } 00:10:42.211 ] 00:10:42.211 }' 00:10:42.211 21:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.211 21:37:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:42.468 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:42.468 21:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.468 21:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.726 [2024-12-10 21:37:43.257512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:42.726 [2024-12-10 21:37:43.257639] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:42.726 [2024-12-10 21:37:43.257668] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:42.726 [2024-12-10 21:37:43.257995] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:42.726 [2024-12-10 21:37:43.258214] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:42.726 [2024-12-10 21:37:43.258279] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:42.726 [2024-12-10 21:37:43.258640] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:42.726 BaseBdev3 00:10:42.726 21:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.726 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:42.726 21:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:42.726 21:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:42.726 21:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:42.726 21:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:42.726 21:37:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:42.726 21:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:42.726 21:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.726 21:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.726 21:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.726 21:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:42.726 21:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.726 21:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.726 [ 00:10:42.726 { 00:10:42.726 "name": "BaseBdev3", 00:10:42.726 "aliases": [ 00:10:42.726 "41534417-e8a5-4c23-b241-e6a8ac9edd83" 00:10:42.726 ], 00:10:42.726 "product_name": "Malloc disk", 00:10:42.726 "block_size": 512, 00:10:42.726 "num_blocks": 65536, 00:10:42.726 "uuid": "41534417-e8a5-4c23-b241-e6a8ac9edd83", 00:10:42.726 "assigned_rate_limits": { 00:10:42.726 "rw_ios_per_sec": 0, 00:10:42.726 "rw_mbytes_per_sec": 0, 00:10:42.726 "r_mbytes_per_sec": 0, 00:10:42.726 "w_mbytes_per_sec": 0 00:10:42.726 }, 00:10:42.726 "claimed": true, 00:10:42.726 "claim_type": "exclusive_write", 00:10:42.726 "zoned": false, 00:10:42.726 "supported_io_types": { 00:10:42.726 "read": true, 00:10:42.726 "write": true, 00:10:42.726 "unmap": true, 00:10:42.726 "flush": true, 00:10:42.726 "reset": true, 00:10:42.726 "nvme_admin": false, 00:10:42.726 "nvme_io": false, 00:10:42.726 "nvme_io_md": false, 00:10:42.726 "write_zeroes": true, 00:10:42.726 "zcopy": true, 00:10:42.726 "get_zone_info": false, 00:10:42.726 "zone_management": false, 00:10:42.727 "zone_append": false, 00:10:42.727 "compare": false, 
00:10:42.727 "compare_and_write": false, 00:10:42.727 "abort": true, 00:10:42.727 "seek_hole": false, 00:10:42.727 "seek_data": false, 00:10:42.727 "copy": true, 00:10:42.727 "nvme_iov_md": false 00:10:42.727 }, 00:10:42.727 "memory_domains": [ 00:10:42.727 { 00:10:42.727 "dma_device_id": "system", 00:10:42.727 "dma_device_type": 1 00:10:42.727 }, 00:10:42.727 { 00:10:42.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.727 "dma_device_type": 2 00:10:42.727 } 00:10:42.727 ], 00:10:42.727 "driver_specific": {} 00:10:42.727 } 00:10:42.727 ] 00:10:42.727 21:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.727 21:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:42.727 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:42.727 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:42.727 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:42.727 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.727 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:42.727 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.727 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.727 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:42.727 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.727 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.727 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:42.727 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.727 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.727 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.727 21:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.727 21:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.727 21:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.727 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.727 "name": "Existed_Raid", 00:10:42.727 "uuid": "ef417cfb-a208-4a30-8a87-b976eaf94d59", 00:10:42.727 "strip_size_kb": 64, 00:10:42.727 "state": "online", 00:10:42.727 "raid_level": "concat", 00:10:42.727 "superblock": false, 00:10:42.727 "num_base_bdevs": 3, 00:10:42.727 "num_base_bdevs_discovered": 3, 00:10:42.727 "num_base_bdevs_operational": 3, 00:10:42.727 "base_bdevs_list": [ 00:10:42.727 { 00:10:42.727 "name": "BaseBdev1", 00:10:42.727 "uuid": "68f4a771-ccba-43b2-895e-f74fd0c3e4e5", 00:10:42.727 "is_configured": true, 00:10:42.727 "data_offset": 0, 00:10:42.727 "data_size": 65536 00:10:42.727 }, 00:10:42.727 { 00:10:42.727 "name": "BaseBdev2", 00:10:42.727 "uuid": "b062766b-4245-46f0-87e4-25a753b03655", 00:10:42.727 "is_configured": true, 00:10:42.727 "data_offset": 0, 00:10:42.727 "data_size": 65536 00:10:42.727 }, 00:10:42.727 { 00:10:42.727 "name": "BaseBdev3", 00:10:42.727 "uuid": "41534417-e8a5-4c23-b241-e6a8ac9edd83", 00:10:42.727 "is_configured": true, 00:10:42.727 "data_offset": 0, 00:10:42.727 "data_size": 65536 00:10:42.727 } 00:10:42.727 ] 00:10:42.727 }' 00:10:42.727 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:10:42.727 21:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.985 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:42.985 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:42.985 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:42.985 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:42.985 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:42.985 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:42.985 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:42.985 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:42.985 21:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.985 21:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.985 [2024-12-10 21:37:43.761050] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:43.244 21:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.244 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:43.244 "name": "Existed_Raid", 00:10:43.244 "aliases": [ 00:10:43.244 "ef417cfb-a208-4a30-8a87-b976eaf94d59" 00:10:43.244 ], 00:10:43.244 "product_name": "Raid Volume", 00:10:43.244 "block_size": 512, 00:10:43.244 "num_blocks": 196608, 00:10:43.244 "uuid": "ef417cfb-a208-4a30-8a87-b976eaf94d59", 00:10:43.244 "assigned_rate_limits": { 00:10:43.244 "rw_ios_per_sec": 0, 00:10:43.244 "rw_mbytes_per_sec": 0, 00:10:43.244 "r_mbytes_per_sec": 
0, 00:10:43.244 "w_mbytes_per_sec": 0 00:10:43.244 }, 00:10:43.244 "claimed": false, 00:10:43.244 "zoned": false, 00:10:43.244 "supported_io_types": { 00:10:43.244 "read": true, 00:10:43.244 "write": true, 00:10:43.244 "unmap": true, 00:10:43.244 "flush": true, 00:10:43.244 "reset": true, 00:10:43.244 "nvme_admin": false, 00:10:43.244 "nvme_io": false, 00:10:43.244 "nvme_io_md": false, 00:10:43.244 "write_zeroes": true, 00:10:43.244 "zcopy": false, 00:10:43.244 "get_zone_info": false, 00:10:43.244 "zone_management": false, 00:10:43.244 "zone_append": false, 00:10:43.244 "compare": false, 00:10:43.244 "compare_and_write": false, 00:10:43.244 "abort": false, 00:10:43.244 "seek_hole": false, 00:10:43.244 "seek_data": false, 00:10:43.244 "copy": false, 00:10:43.244 "nvme_iov_md": false 00:10:43.244 }, 00:10:43.244 "memory_domains": [ 00:10:43.244 { 00:10:43.244 "dma_device_id": "system", 00:10:43.244 "dma_device_type": 1 00:10:43.244 }, 00:10:43.244 { 00:10:43.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.244 "dma_device_type": 2 00:10:43.244 }, 00:10:43.244 { 00:10:43.244 "dma_device_id": "system", 00:10:43.244 "dma_device_type": 1 00:10:43.244 }, 00:10:43.244 { 00:10:43.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.244 "dma_device_type": 2 00:10:43.244 }, 00:10:43.244 { 00:10:43.244 "dma_device_id": "system", 00:10:43.244 "dma_device_type": 1 00:10:43.244 }, 00:10:43.244 { 00:10:43.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.244 "dma_device_type": 2 00:10:43.244 } 00:10:43.244 ], 00:10:43.244 "driver_specific": { 00:10:43.244 "raid": { 00:10:43.244 "uuid": "ef417cfb-a208-4a30-8a87-b976eaf94d59", 00:10:43.244 "strip_size_kb": 64, 00:10:43.244 "state": "online", 00:10:43.244 "raid_level": "concat", 00:10:43.244 "superblock": false, 00:10:43.244 "num_base_bdevs": 3, 00:10:43.244 "num_base_bdevs_discovered": 3, 00:10:43.244 "num_base_bdevs_operational": 3, 00:10:43.244 "base_bdevs_list": [ 00:10:43.244 { 00:10:43.244 "name": "BaseBdev1", 
00:10:43.244 "uuid": "68f4a771-ccba-43b2-895e-f74fd0c3e4e5", 00:10:43.244 "is_configured": true, 00:10:43.244 "data_offset": 0, 00:10:43.244 "data_size": 65536 00:10:43.244 }, 00:10:43.244 { 00:10:43.244 "name": "BaseBdev2", 00:10:43.244 "uuid": "b062766b-4245-46f0-87e4-25a753b03655", 00:10:43.244 "is_configured": true, 00:10:43.244 "data_offset": 0, 00:10:43.244 "data_size": 65536 00:10:43.244 }, 00:10:43.244 { 00:10:43.244 "name": "BaseBdev3", 00:10:43.244 "uuid": "41534417-e8a5-4c23-b241-e6a8ac9edd83", 00:10:43.244 "is_configured": true, 00:10:43.244 "data_offset": 0, 00:10:43.244 "data_size": 65536 00:10:43.244 } 00:10:43.244 ] 00:10:43.244 } 00:10:43.244 } 00:10:43.244 }' 00:10:43.244 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:43.244 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:43.244 BaseBdev2 00:10:43.244 BaseBdev3' 00:10:43.245 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.245 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:43.245 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.245 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.245 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:43.245 21:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.245 21:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.245 21:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:43.245 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.245 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.245 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.245 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:43.245 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.245 21:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.245 21:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.245 21:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.245 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.245 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.245 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.245 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:43.245 21:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.245 21:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.245 21:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.245 21:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.245 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:10:43.245 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.245 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:43.245 21:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.245 21:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.245 [2024-12-10 21:37:44.020356] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:43.245 [2024-12-10 21:37:44.020469] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:43.245 [2024-12-10 21:37:44.020555] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:43.504 21:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.504 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:43.504 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:43.504 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:43.504 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:43.504 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:43.504 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:43.504 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.504 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:43.504 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:43.504 21:37:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.504 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:43.504 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.504 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.504 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.504 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.504 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.504 21:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.504 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.504 21:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.504 21:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.504 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.504 "name": "Existed_Raid", 00:10:43.504 "uuid": "ef417cfb-a208-4a30-8a87-b976eaf94d59", 00:10:43.504 "strip_size_kb": 64, 00:10:43.504 "state": "offline", 00:10:43.504 "raid_level": "concat", 00:10:43.504 "superblock": false, 00:10:43.504 "num_base_bdevs": 3, 00:10:43.504 "num_base_bdevs_discovered": 2, 00:10:43.504 "num_base_bdevs_operational": 2, 00:10:43.504 "base_bdevs_list": [ 00:10:43.504 { 00:10:43.504 "name": null, 00:10:43.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.504 "is_configured": false, 00:10:43.504 "data_offset": 0, 00:10:43.504 "data_size": 65536 00:10:43.504 }, 00:10:43.504 { 00:10:43.504 "name": "BaseBdev2", 00:10:43.504 "uuid": 
"b062766b-4245-46f0-87e4-25a753b03655", 00:10:43.504 "is_configured": true, 00:10:43.504 "data_offset": 0, 00:10:43.504 "data_size": 65536 00:10:43.504 }, 00:10:43.504 { 00:10:43.504 "name": "BaseBdev3", 00:10:43.504 "uuid": "41534417-e8a5-4c23-b241-e6a8ac9edd83", 00:10:43.504 "is_configured": true, 00:10:43.504 "data_offset": 0, 00:10:43.504 "data_size": 65536 00:10:43.504 } 00:10:43.504 ] 00:10:43.504 }' 00:10:43.504 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.504 21:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.073 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:44.073 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:44.073 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.073 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:44.073 21:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.073 21:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.073 21:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.073 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:44.073 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:44.073 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:44.073 21:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.073 21:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.073 [2024-12-10 21:37:44.634829] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:44.073 21:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.073 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:44.073 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:44.073 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:44.073 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.073 21:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.073 21:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.073 21:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.073 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:44.073 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:44.073 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:44.073 21:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.073 21:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.073 [2024-12-10 21:37:44.799641] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:44.073 [2024-12-10 21:37:44.799752] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:44.333 21:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.333 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:44.333 21:37:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:44.333 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.333 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:44.333 21:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.333 21:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.333 21:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.333 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:44.333 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:44.333 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:44.333 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:44.333 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:44.333 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:44.333 21:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.333 21:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.333 BaseBdev2 00:10:44.333 21:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.333 21:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:44.333 21:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:44.333 21:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:44.333 
21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:44.333 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:44.333 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:44.333 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:44.333 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.333 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.333 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.333 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:44.333 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.333 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.333 [ 00:10:44.333 { 00:10:44.333 "name": "BaseBdev2", 00:10:44.333 "aliases": [ 00:10:44.333 "623a0cfd-0f0c-4958-905f-3a043715a841" 00:10:44.333 ], 00:10:44.333 "product_name": "Malloc disk", 00:10:44.333 "block_size": 512, 00:10:44.333 "num_blocks": 65536, 00:10:44.333 "uuid": "623a0cfd-0f0c-4958-905f-3a043715a841", 00:10:44.333 "assigned_rate_limits": { 00:10:44.333 "rw_ios_per_sec": 0, 00:10:44.333 "rw_mbytes_per_sec": 0, 00:10:44.333 "r_mbytes_per_sec": 0, 00:10:44.333 "w_mbytes_per_sec": 0 00:10:44.333 }, 00:10:44.333 "claimed": false, 00:10:44.333 "zoned": false, 00:10:44.333 "supported_io_types": { 00:10:44.333 "read": true, 00:10:44.333 "write": true, 00:10:44.333 "unmap": true, 00:10:44.333 "flush": true, 00:10:44.333 "reset": true, 00:10:44.333 "nvme_admin": false, 00:10:44.333 "nvme_io": false, 00:10:44.333 "nvme_io_md": false, 00:10:44.333 "write_zeroes": true, 
00:10:44.333 "zcopy": true, 00:10:44.333 "get_zone_info": false, 00:10:44.333 "zone_management": false, 00:10:44.333 "zone_append": false, 00:10:44.333 "compare": false, 00:10:44.333 "compare_and_write": false, 00:10:44.333 "abort": true, 00:10:44.333 "seek_hole": false, 00:10:44.333 "seek_data": false, 00:10:44.333 "copy": true, 00:10:44.333 "nvme_iov_md": false 00:10:44.333 }, 00:10:44.333 "memory_domains": [ 00:10:44.333 { 00:10:44.333 "dma_device_id": "system", 00:10:44.333 "dma_device_type": 1 00:10:44.333 }, 00:10:44.333 { 00:10:44.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.333 "dma_device_type": 2 00:10:44.333 } 00:10:44.333 ], 00:10:44.333 "driver_specific": {} 00:10:44.333 } 00:10:44.333 ] 00:10:44.333 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.333 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:44.333 21:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:44.333 21:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:44.333 21:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:44.333 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.333 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.333 BaseBdev3 00:10:44.333 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.333 21:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:44.333 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:44.333 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:44.333 21:37:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:44.333 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:44.333 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:44.333 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:44.333 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.333 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.333 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.333 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:44.333 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.333 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.333 [ 00:10:44.333 { 00:10:44.333 "name": "BaseBdev3", 00:10:44.333 "aliases": [ 00:10:44.333 "c2a42251-10f2-4895-b1cb-ec3b8c10274c" 00:10:44.333 ], 00:10:44.333 "product_name": "Malloc disk", 00:10:44.333 "block_size": 512, 00:10:44.333 "num_blocks": 65536, 00:10:44.333 "uuid": "c2a42251-10f2-4895-b1cb-ec3b8c10274c", 00:10:44.333 "assigned_rate_limits": { 00:10:44.333 "rw_ios_per_sec": 0, 00:10:44.333 "rw_mbytes_per_sec": 0, 00:10:44.333 "r_mbytes_per_sec": 0, 00:10:44.333 "w_mbytes_per_sec": 0 00:10:44.333 }, 00:10:44.333 "claimed": false, 00:10:44.333 "zoned": false, 00:10:44.333 "supported_io_types": { 00:10:44.333 "read": true, 00:10:44.333 "write": true, 00:10:44.333 "unmap": true, 00:10:44.333 "flush": true, 00:10:44.333 "reset": true, 00:10:44.333 "nvme_admin": false, 00:10:44.333 "nvme_io": false, 00:10:44.333 "nvme_io_md": false, 00:10:44.333 "write_zeroes": true, 
00:10:44.333 "zcopy": true, 00:10:44.333 "get_zone_info": false, 00:10:44.593 "zone_management": false, 00:10:44.593 "zone_append": false, 00:10:44.593 "compare": false, 00:10:44.593 "compare_and_write": false, 00:10:44.593 "abort": true, 00:10:44.593 "seek_hole": false, 00:10:44.593 "seek_data": false, 00:10:44.593 "copy": true, 00:10:44.593 "nvme_iov_md": false 00:10:44.593 }, 00:10:44.593 "memory_domains": [ 00:10:44.593 { 00:10:44.593 "dma_device_id": "system", 00:10:44.593 "dma_device_type": 1 00:10:44.593 }, 00:10:44.593 { 00:10:44.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.593 "dma_device_type": 2 00:10:44.593 } 00:10:44.593 ], 00:10:44.593 "driver_specific": {} 00:10:44.593 } 00:10:44.593 ] 00:10:44.593 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.593 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:44.593 21:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:44.593 21:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:44.593 21:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:44.593 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.593 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.593 [2024-12-10 21:37:45.126470] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:44.593 [2024-12-10 21:37:45.126582] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:44.593 [2024-12-10 21:37:45.126639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:44.593 [2024-12-10 21:37:45.128773] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:44.593 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.593 21:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:44.593 21:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.593 21:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.593 21:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.593 21:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.593 21:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:44.593 21:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.593 21:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.593 21:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.593 21:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.593 21:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.593 21:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.593 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.593 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.593 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.593 21:37:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.593 "name": "Existed_Raid", 00:10:44.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.593 "strip_size_kb": 64, 00:10:44.593 "state": "configuring", 00:10:44.593 "raid_level": "concat", 00:10:44.593 "superblock": false, 00:10:44.593 "num_base_bdevs": 3, 00:10:44.593 "num_base_bdevs_discovered": 2, 00:10:44.593 "num_base_bdevs_operational": 3, 00:10:44.593 "base_bdevs_list": [ 00:10:44.593 { 00:10:44.593 "name": "BaseBdev1", 00:10:44.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.593 "is_configured": false, 00:10:44.593 "data_offset": 0, 00:10:44.593 "data_size": 0 00:10:44.593 }, 00:10:44.593 { 00:10:44.593 "name": "BaseBdev2", 00:10:44.593 "uuid": "623a0cfd-0f0c-4958-905f-3a043715a841", 00:10:44.593 "is_configured": true, 00:10:44.593 "data_offset": 0, 00:10:44.593 "data_size": 65536 00:10:44.593 }, 00:10:44.593 { 00:10:44.593 "name": "BaseBdev3", 00:10:44.593 "uuid": "c2a42251-10f2-4895-b1cb-ec3b8c10274c", 00:10:44.593 "is_configured": true, 00:10:44.593 "data_offset": 0, 00:10:44.593 "data_size": 65536 00:10:44.593 } 00:10:44.593 ] 00:10:44.593 }' 00:10:44.593 21:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.593 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.852 21:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:44.852 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.852 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.852 [2024-12-10 21:37:45.593677] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:44.852 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.852 21:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:44.852 21:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.852 21:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.852 21:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.852 21:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.852 21:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:44.852 21:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.852 21:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.852 21:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.852 21:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.852 21:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.852 21:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.852 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.852 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.852 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.110 21:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.110 "name": "Existed_Raid", 00:10:45.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.110 "strip_size_kb": 64, 00:10:45.110 "state": "configuring", 00:10:45.110 "raid_level": "concat", 00:10:45.110 "superblock": false, 
00:10:45.110 "num_base_bdevs": 3, 00:10:45.110 "num_base_bdevs_discovered": 1, 00:10:45.110 "num_base_bdevs_operational": 3, 00:10:45.110 "base_bdevs_list": [ 00:10:45.110 { 00:10:45.110 "name": "BaseBdev1", 00:10:45.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.110 "is_configured": false, 00:10:45.111 "data_offset": 0, 00:10:45.111 "data_size": 0 00:10:45.111 }, 00:10:45.111 { 00:10:45.111 "name": null, 00:10:45.111 "uuid": "623a0cfd-0f0c-4958-905f-3a043715a841", 00:10:45.111 "is_configured": false, 00:10:45.111 "data_offset": 0, 00:10:45.111 "data_size": 65536 00:10:45.111 }, 00:10:45.111 { 00:10:45.111 "name": "BaseBdev3", 00:10:45.111 "uuid": "c2a42251-10f2-4895-b1cb-ec3b8c10274c", 00:10:45.111 "is_configured": true, 00:10:45.111 "data_offset": 0, 00:10:45.111 "data_size": 65536 00:10:45.111 } 00:10:45.111 ] 00:10:45.111 }' 00:10:45.111 21:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.111 21:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.371 21:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.371 21:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.371 21:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.371 21:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:45.371 21:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.371 21:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:45.371 21:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:45.371 21:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.371 
21:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.371 [2024-12-10 21:37:46.134579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:45.371 BaseBdev1 00:10:45.371 21:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.371 21:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:45.371 21:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:45.371 21:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:45.371 21:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:45.371 21:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:45.371 21:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:45.371 21:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:45.371 21:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.371 21:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.371 21:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.371 21:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:45.371 21:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.371 21:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.631 [ 00:10:45.631 { 00:10:45.631 "name": "BaseBdev1", 00:10:45.631 "aliases": [ 00:10:45.631 "a28eed67-6389-4723-832d-27f38d8469c2" 00:10:45.631 ], 00:10:45.631 "product_name": 
"Malloc disk", 00:10:45.631 "block_size": 512, 00:10:45.631 "num_blocks": 65536, 00:10:45.631 "uuid": "a28eed67-6389-4723-832d-27f38d8469c2", 00:10:45.631 "assigned_rate_limits": { 00:10:45.631 "rw_ios_per_sec": 0, 00:10:45.631 "rw_mbytes_per_sec": 0, 00:10:45.631 "r_mbytes_per_sec": 0, 00:10:45.631 "w_mbytes_per_sec": 0 00:10:45.631 }, 00:10:45.631 "claimed": true, 00:10:45.631 "claim_type": "exclusive_write", 00:10:45.631 "zoned": false, 00:10:45.631 "supported_io_types": { 00:10:45.631 "read": true, 00:10:45.631 "write": true, 00:10:45.631 "unmap": true, 00:10:45.631 "flush": true, 00:10:45.631 "reset": true, 00:10:45.631 "nvme_admin": false, 00:10:45.631 "nvme_io": false, 00:10:45.631 "nvme_io_md": false, 00:10:45.631 "write_zeroes": true, 00:10:45.631 "zcopy": true, 00:10:45.631 "get_zone_info": false, 00:10:45.631 "zone_management": false, 00:10:45.631 "zone_append": false, 00:10:45.631 "compare": false, 00:10:45.631 "compare_and_write": false, 00:10:45.631 "abort": true, 00:10:45.631 "seek_hole": false, 00:10:45.631 "seek_data": false, 00:10:45.631 "copy": true, 00:10:45.631 "nvme_iov_md": false 00:10:45.631 }, 00:10:45.631 "memory_domains": [ 00:10:45.631 { 00:10:45.631 "dma_device_id": "system", 00:10:45.631 "dma_device_type": 1 00:10:45.631 }, 00:10:45.631 { 00:10:45.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.631 "dma_device_type": 2 00:10:45.631 } 00:10:45.631 ], 00:10:45.631 "driver_specific": {} 00:10:45.631 } 00:10:45.631 ] 00:10:45.631 21:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.631 21:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:45.631 21:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:45.631 21:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.631 21:37:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.631 21:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.631 21:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.631 21:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:45.631 21:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.631 21:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.631 21:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.631 21:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.631 21:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.631 21:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.631 21:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.631 21:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.631 21:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.631 21:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.631 "name": "Existed_Raid", 00:10:45.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.631 "strip_size_kb": 64, 00:10:45.631 "state": "configuring", 00:10:45.631 "raid_level": "concat", 00:10:45.631 "superblock": false, 00:10:45.631 "num_base_bdevs": 3, 00:10:45.631 "num_base_bdevs_discovered": 2, 00:10:45.631 "num_base_bdevs_operational": 3, 00:10:45.631 "base_bdevs_list": [ 00:10:45.631 { 00:10:45.631 "name": "BaseBdev1", 
00:10:45.631 "uuid": "a28eed67-6389-4723-832d-27f38d8469c2", 00:10:45.631 "is_configured": true, 00:10:45.631 "data_offset": 0, 00:10:45.631 "data_size": 65536 00:10:45.631 }, 00:10:45.631 { 00:10:45.631 "name": null, 00:10:45.631 "uuid": "623a0cfd-0f0c-4958-905f-3a043715a841", 00:10:45.631 "is_configured": false, 00:10:45.631 "data_offset": 0, 00:10:45.631 "data_size": 65536 00:10:45.631 }, 00:10:45.631 { 00:10:45.631 "name": "BaseBdev3", 00:10:45.631 "uuid": "c2a42251-10f2-4895-b1cb-ec3b8c10274c", 00:10:45.631 "is_configured": true, 00:10:45.631 "data_offset": 0, 00:10:45.631 "data_size": 65536 00:10:45.631 } 00:10:45.631 ] 00:10:45.631 }' 00:10:45.631 21:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.631 21:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.894 21:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.894 21:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.894 21:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:45.894 21:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.894 21:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.894 21:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:45.894 21:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:45.894 21:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.894 21:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.894 [2024-12-10 21:37:46.637807] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:45.894 
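The `verify_raid_bdev_state` checks traced above boil down to fetching the raid bdev's JSON and probing fields with jq. A minimal, self-contained sketch of that parsing step follows; the JSON here is inlined from the log output above (in the real test it comes from `rpc_cmd bdev_raid_get_bdevs all`), so the `info` variable and its contents are stand-ins, not a live RPC call.

```shell
# Sample bdev_raid_get_bdevs-style output, abridged from the trace above:
# Existed_Raid is "configuring" with BaseBdev2 removed (slot 1 unconfigured).
info='[{"name":"Existed_Raid","state":"configuring","raid_level":"concat","num_base_bdevs":3,"num_base_bdevs_discovered":2,"base_bdevs_list":[{"name":"BaseBdev1","is_configured":true},{"name":null,"is_configured":false},{"name":"BaseBdev3","is_configured":true}]}]'

# Select the raid bdev by name, as bdev_raid.sh@113 does.
raid=$(jq -r '.[] | select(.name == "Existed_Raid")' <<< "$info")

# Probe the fields the test asserts on.
echo "state=$(jq -r '.state' <<< "$raid")"
echo "discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$raid")"
echo "slot1_configured=$(jq -r '.base_bdevs_list[1].is_configured' <<< "$raid")"
```

After `bdev_raid_remove_base_bdev BaseBdev2`, the array stays `configuring` with `num_base_bdevs_discovered` dropping to reflect the missing slot, which is exactly what the `[[ false == \f\a\l\s\e ]]` checks in the trace confirm.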
21:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.894 21:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:45.894 21:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.894 21:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.894 21:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.894 21:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.894 21:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:45.894 21:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.894 21:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.894 21:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.894 21:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.894 21:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.894 21:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.894 21:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.895 21:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.895 21:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.159 21:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.159 "name": "Existed_Raid", 00:10:46.159 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:46.159 "strip_size_kb": 64, 00:10:46.159 "state": "configuring", 00:10:46.159 "raid_level": "concat", 00:10:46.159 "superblock": false, 00:10:46.159 "num_base_bdevs": 3, 00:10:46.159 "num_base_bdevs_discovered": 1, 00:10:46.159 "num_base_bdevs_operational": 3, 00:10:46.159 "base_bdevs_list": [ 00:10:46.159 { 00:10:46.159 "name": "BaseBdev1", 00:10:46.159 "uuid": "a28eed67-6389-4723-832d-27f38d8469c2", 00:10:46.159 "is_configured": true, 00:10:46.159 "data_offset": 0, 00:10:46.159 "data_size": 65536 00:10:46.159 }, 00:10:46.159 { 00:10:46.159 "name": null, 00:10:46.159 "uuid": "623a0cfd-0f0c-4958-905f-3a043715a841", 00:10:46.159 "is_configured": false, 00:10:46.159 "data_offset": 0, 00:10:46.159 "data_size": 65536 00:10:46.159 }, 00:10:46.159 { 00:10:46.159 "name": null, 00:10:46.159 "uuid": "c2a42251-10f2-4895-b1cb-ec3b8c10274c", 00:10:46.159 "is_configured": false, 00:10:46.159 "data_offset": 0, 00:10:46.159 "data_size": 65536 00:10:46.159 } 00:10:46.159 ] 00:10:46.159 }' 00:10:46.159 21:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.159 21:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.419 21:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.419 21:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:46.419 21:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.419 21:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.419 21:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.419 21:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:46.419 21:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:46.419 21:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.419 21:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.419 [2024-12-10 21:37:47.125005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:46.419 21:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.419 21:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:46.419 21:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.419 21:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.419 21:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:46.419 21:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.419 21:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:46.419 21:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.419 21:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.419 21:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.419 21:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.419 21:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.419 21:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.419 21:37:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.419 21:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.419 21:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.419 21:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.419 "name": "Existed_Raid", 00:10:46.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.419 "strip_size_kb": 64, 00:10:46.419 "state": "configuring", 00:10:46.419 "raid_level": "concat", 00:10:46.419 "superblock": false, 00:10:46.419 "num_base_bdevs": 3, 00:10:46.419 "num_base_bdevs_discovered": 2, 00:10:46.419 "num_base_bdevs_operational": 3, 00:10:46.419 "base_bdevs_list": [ 00:10:46.419 { 00:10:46.419 "name": "BaseBdev1", 00:10:46.419 "uuid": "a28eed67-6389-4723-832d-27f38d8469c2", 00:10:46.419 "is_configured": true, 00:10:46.419 "data_offset": 0, 00:10:46.419 "data_size": 65536 00:10:46.419 }, 00:10:46.419 { 00:10:46.419 "name": null, 00:10:46.419 "uuid": "623a0cfd-0f0c-4958-905f-3a043715a841", 00:10:46.419 "is_configured": false, 00:10:46.419 "data_offset": 0, 00:10:46.419 "data_size": 65536 00:10:46.419 }, 00:10:46.419 { 00:10:46.419 "name": "BaseBdev3", 00:10:46.419 "uuid": "c2a42251-10f2-4895-b1cb-ec3b8c10274c", 00:10:46.419 "is_configured": true, 00:10:46.419 "data_offset": 0, 00:10:46.419 "data_size": 65536 00:10:46.419 } 00:10:46.419 ] 00:10:46.419 }' 00:10:46.419 21:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.419 21:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.989 21:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.989 21:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:46.989 21:37:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.989 21:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.989 21:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.989 21:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:46.989 21:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:46.989 21:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.989 21:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.989 [2024-12-10 21:37:47.636170] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:46.989 21:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.989 21:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:46.989 21:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.989 21:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.989 21:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:46.989 21:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.989 21:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:46.989 21:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.989 21:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.989 21:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.989 
21:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.989 21:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.989 21:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.989 21:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.989 21:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.250 21:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.250 21:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.250 "name": "Existed_Raid", 00:10:47.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.250 "strip_size_kb": 64, 00:10:47.250 "state": "configuring", 00:10:47.250 "raid_level": "concat", 00:10:47.250 "superblock": false, 00:10:47.250 "num_base_bdevs": 3, 00:10:47.250 "num_base_bdevs_discovered": 1, 00:10:47.250 "num_base_bdevs_operational": 3, 00:10:47.250 "base_bdevs_list": [ 00:10:47.250 { 00:10:47.250 "name": null, 00:10:47.250 "uuid": "a28eed67-6389-4723-832d-27f38d8469c2", 00:10:47.250 "is_configured": false, 00:10:47.250 "data_offset": 0, 00:10:47.250 "data_size": 65536 00:10:47.250 }, 00:10:47.250 { 00:10:47.250 "name": null, 00:10:47.250 "uuid": "623a0cfd-0f0c-4958-905f-3a043715a841", 00:10:47.250 "is_configured": false, 00:10:47.250 "data_offset": 0, 00:10:47.250 "data_size": 65536 00:10:47.250 }, 00:10:47.250 { 00:10:47.250 "name": "BaseBdev3", 00:10:47.250 "uuid": "c2a42251-10f2-4895-b1cb-ec3b8c10274c", 00:10:47.250 "is_configured": true, 00:10:47.250 "data_offset": 0, 00:10:47.250 "data_size": 65536 00:10:47.250 } 00:10:47.250 ] 00:10:47.250 }' 00:10:47.250 21:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.250 21:37:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.509 21:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:47.509 21:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.509 21:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.509 21:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.509 21:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.509 21:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:47.509 21:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:47.509 21:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.509 21:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.509 [2024-12-10 21:37:48.259759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:47.509 21:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.509 21:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:47.509 21:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.509 21:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.509 21:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:47.509 21:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.509 21:37:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:47.509 21:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.509 21:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.509 21:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.509 21:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.509 21:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.509 21:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.509 21:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.509 21:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.768 21:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.768 21:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.768 "name": "Existed_Raid", 00:10:47.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.768 "strip_size_kb": 64, 00:10:47.768 "state": "configuring", 00:10:47.768 "raid_level": "concat", 00:10:47.768 "superblock": false, 00:10:47.768 "num_base_bdevs": 3, 00:10:47.768 "num_base_bdevs_discovered": 2, 00:10:47.768 "num_base_bdevs_operational": 3, 00:10:47.768 "base_bdevs_list": [ 00:10:47.768 { 00:10:47.768 "name": null, 00:10:47.768 "uuid": "a28eed67-6389-4723-832d-27f38d8469c2", 00:10:47.768 "is_configured": false, 00:10:47.768 "data_offset": 0, 00:10:47.768 "data_size": 65536 00:10:47.768 }, 00:10:47.768 { 00:10:47.768 "name": "BaseBdev2", 00:10:47.768 "uuid": "623a0cfd-0f0c-4958-905f-3a043715a841", 00:10:47.768 "is_configured": true, 00:10:47.768 "data_offset": 
0, 00:10:47.768 "data_size": 65536 00:10:47.768 }, 00:10:47.768 { 00:10:47.768 "name": "BaseBdev3", 00:10:47.768 "uuid": "c2a42251-10f2-4895-b1cb-ec3b8c10274c", 00:10:47.768 "is_configured": true, 00:10:47.768 "data_offset": 0, 00:10:47.768 "data_size": 65536 00:10:47.768 } 00:10:47.768 ] 00:10:47.768 }' 00:10:47.768 21:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.768 21:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.028 21:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.028 21:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.028 21:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.028 21:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:48.028 21:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.028 21:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:48.028 21:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.028 21:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.028 21:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.028 21:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:48.028 21:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.287 21:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a28eed67-6389-4723-832d-27f38d8469c2 00:10:48.287 21:37:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.287 21:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.287 [2024-12-10 21:37:48.880334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:48.287 [2024-12-10 21:37:48.880529] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:48.287 [2024-12-10 21:37:48.880564] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:48.287 [2024-12-10 21:37:48.880872] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:48.287 [2024-12-10 21:37:48.881090] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:48.287 [2024-12-10 21:37:48.881136] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:48.287 [2024-12-10 21:37:48.881474] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:48.287 NewBaseBdev 00:10:48.287 21:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.287 21:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:48.287 21:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:48.287 21:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:48.287 21:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:48.287 21:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:48.287 21:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:48.287 21:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:48.287 
21:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.287 21:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.287 21:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.287 21:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:48.287 21:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.287 21:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.287 [ 00:10:48.287 { 00:10:48.287 "name": "NewBaseBdev", 00:10:48.287 "aliases": [ 00:10:48.287 "a28eed67-6389-4723-832d-27f38d8469c2" 00:10:48.287 ], 00:10:48.287 "product_name": "Malloc disk", 00:10:48.287 "block_size": 512, 00:10:48.287 "num_blocks": 65536, 00:10:48.287 "uuid": "a28eed67-6389-4723-832d-27f38d8469c2", 00:10:48.287 "assigned_rate_limits": { 00:10:48.287 "rw_ios_per_sec": 0, 00:10:48.287 "rw_mbytes_per_sec": 0, 00:10:48.287 "r_mbytes_per_sec": 0, 00:10:48.287 "w_mbytes_per_sec": 0 00:10:48.287 }, 00:10:48.287 "claimed": true, 00:10:48.287 "claim_type": "exclusive_write", 00:10:48.287 "zoned": false, 00:10:48.287 "supported_io_types": { 00:10:48.287 "read": true, 00:10:48.287 "write": true, 00:10:48.288 "unmap": true, 00:10:48.288 "flush": true, 00:10:48.288 "reset": true, 00:10:48.288 "nvme_admin": false, 00:10:48.288 "nvme_io": false, 00:10:48.288 "nvme_io_md": false, 00:10:48.288 "write_zeroes": true, 00:10:48.288 "zcopy": true, 00:10:48.288 "get_zone_info": false, 00:10:48.288 "zone_management": false, 00:10:48.288 "zone_append": false, 00:10:48.288 "compare": false, 00:10:48.288 "compare_and_write": false, 00:10:48.288 "abort": true, 00:10:48.288 "seek_hole": false, 00:10:48.288 "seek_data": false, 00:10:48.288 "copy": true, 00:10:48.288 "nvme_iov_md": false 00:10:48.288 }, 00:10:48.288 
"memory_domains": [ 00:10:48.288 { 00:10:48.288 "dma_device_id": "system", 00:10:48.288 "dma_device_type": 1 00:10:48.288 }, 00:10:48.288 { 00:10:48.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.288 "dma_device_type": 2 00:10:48.288 } 00:10:48.288 ], 00:10:48.288 "driver_specific": {} 00:10:48.288 } 00:10:48.288 ] 00:10:48.288 21:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.288 21:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:48.288 21:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:48.288 21:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.288 21:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:48.288 21:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:48.288 21:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.288 21:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:48.288 21:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.288 21:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.288 21:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.288 21:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.288 21:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.288 21:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.288 21:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.288 21:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.288 21:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.288 21:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.288 "name": "Existed_Raid", 00:10:48.288 "uuid": "8cb242b9-22bc-4474-addb-f72a40b8614e", 00:10:48.288 "strip_size_kb": 64, 00:10:48.288 "state": "online", 00:10:48.288 "raid_level": "concat", 00:10:48.288 "superblock": false, 00:10:48.288 "num_base_bdevs": 3, 00:10:48.288 "num_base_bdevs_discovered": 3, 00:10:48.288 "num_base_bdevs_operational": 3, 00:10:48.288 "base_bdevs_list": [ 00:10:48.288 { 00:10:48.288 "name": "NewBaseBdev", 00:10:48.288 "uuid": "a28eed67-6389-4723-832d-27f38d8469c2", 00:10:48.288 "is_configured": true, 00:10:48.288 "data_offset": 0, 00:10:48.288 "data_size": 65536 00:10:48.288 }, 00:10:48.288 { 00:10:48.288 "name": "BaseBdev2", 00:10:48.288 "uuid": "623a0cfd-0f0c-4958-905f-3a043715a841", 00:10:48.288 "is_configured": true, 00:10:48.288 "data_offset": 0, 00:10:48.288 "data_size": 65536 00:10:48.288 }, 00:10:48.288 { 00:10:48.288 "name": "BaseBdev3", 00:10:48.288 "uuid": "c2a42251-10f2-4895-b1cb-ec3b8c10274c", 00:10:48.288 "is_configured": true, 00:10:48.288 "data_offset": 0, 00:10:48.288 "data_size": 65536 00:10:48.288 } 00:10:48.288 ] 00:10:48.288 }' 00:10:48.288 21:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.288 21:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.856 21:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:48.856 21:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:48.856 21:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:10:48.856 21:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:48.856 21:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:48.856 21:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:48.856 21:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:48.856 21:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.856 21:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.856 21:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:48.856 [2024-12-10 21:37:49.372101] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:48.856 21:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.856 21:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:48.856 "name": "Existed_Raid", 00:10:48.856 "aliases": [ 00:10:48.856 "8cb242b9-22bc-4474-addb-f72a40b8614e" 00:10:48.856 ], 00:10:48.856 "product_name": "Raid Volume", 00:10:48.856 "block_size": 512, 00:10:48.856 "num_blocks": 196608, 00:10:48.856 "uuid": "8cb242b9-22bc-4474-addb-f72a40b8614e", 00:10:48.856 "assigned_rate_limits": { 00:10:48.856 "rw_ios_per_sec": 0, 00:10:48.856 "rw_mbytes_per_sec": 0, 00:10:48.856 "r_mbytes_per_sec": 0, 00:10:48.856 "w_mbytes_per_sec": 0 00:10:48.856 }, 00:10:48.856 "claimed": false, 00:10:48.856 "zoned": false, 00:10:48.856 "supported_io_types": { 00:10:48.856 "read": true, 00:10:48.856 "write": true, 00:10:48.856 "unmap": true, 00:10:48.856 "flush": true, 00:10:48.856 "reset": true, 00:10:48.856 "nvme_admin": false, 00:10:48.856 "nvme_io": false, 00:10:48.856 "nvme_io_md": false, 00:10:48.856 "write_zeroes": true, 
00:10:48.856 "zcopy": false, 00:10:48.856 "get_zone_info": false, 00:10:48.856 "zone_management": false, 00:10:48.856 "zone_append": false, 00:10:48.856 "compare": false, 00:10:48.856 "compare_and_write": false, 00:10:48.856 "abort": false, 00:10:48.856 "seek_hole": false, 00:10:48.856 "seek_data": false, 00:10:48.856 "copy": false, 00:10:48.856 "nvme_iov_md": false 00:10:48.856 }, 00:10:48.856 "memory_domains": [ 00:10:48.856 { 00:10:48.856 "dma_device_id": "system", 00:10:48.856 "dma_device_type": 1 00:10:48.856 }, 00:10:48.856 { 00:10:48.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.856 "dma_device_type": 2 00:10:48.856 }, 00:10:48.856 { 00:10:48.856 "dma_device_id": "system", 00:10:48.856 "dma_device_type": 1 00:10:48.856 }, 00:10:48.856 { 00:10:48.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.856 "dma_device_type": 2 00:10:48.856 }, 00:10:48.856 { 00:10:48.856 "dma_device_id": "system", 00:10:48.856 "dma_device_type": 1 00:10:48.856 }, 00:10:48.856 { 00:10:48.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.856 "dma_device_type": 2 00:10:48.856 } 00:10:48.856 ], 00:10:48.856 "driver_specific": { 00:10:48.856 "raid": { 00:10:48.856 "uuid": "8cb242b9-22bc-4474-addb-f72a40b8614e", 00:10:48.856 "strip_size_kb": 64, 00:10:48.856 "state": "online", 00:10:48.856 "raid_level": "concat", 00:10:48.856 "superblock": false, 00:10:48.856 "num_base_bdevs": 3, 00:10:48.856 "num_base_bdevs_discovered": 3, 00:10:48.856 "num_base_bdevs_operational": 3, 00:10:48.856 "base_bdevs_list": [ 00:10:48.856 { 00:10:48.856 "name": "NewBaseBdev", 00:10:48.856 "uuid": "a28eed67-6389-4723-832d-27f38d8469c2", 00:10:48.856 "is_configured": true, 00:10:48.856 "data_offset": 0, 00:10:48.856 "data_size": 65536 00:10:48.856 }, 00:10:48.856 { 00:10:48.856 "name": "BaseBdev2", 00:10:48.856 "uuid": "623a0cfd-0f0c-4958-905f-3a043715a841", 00:10:48.856 "is_configured": true, 00:10:48.856 "data_offset": 0, 00:10:48.856 "data_size": 65536 00:10:48.856 }, 00:10:48.856 { 
00:10:48.856 "name": "BaseBdev3", 00:10:48.856 "uuid": "c2a42251-10f2-4895-b1cb-ec3b8c10274c", 00:10:48.856 "is_configured": true, 00:10:48.856 "data_offset": 0, 00:10:48.856 "data_size": 65536 00:10:48.856 } 00:10:48.856 ] 00:10:48.856 } 00:10:48.856 } 00:10:48.856 }' 00:10:48.856 21:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:48.856 21:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:48.856 BaseBdev2 00:10:48.856 BaseBdev3' 00:10:48.856 21:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.856 21:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:48.856 21:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.856 21:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:48.856 21:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.856 21:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.856 21:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.856 21:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.857 21:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.857 21:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.857 21:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.857 21:37:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:48.857 21:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.857 21:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.857 21:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.857 21:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.857 21:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.857 21:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.857 21:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.857 21:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:48.857 21:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.857 21:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.857 21:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.116 21:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.116 21:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.116 21:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.116 21:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:49.116 21:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.116 21:37:49 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:49.116 [2024-12-10 21:37:49.675746] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:49.116 [2024-12-10 21:37:49.675818] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:49.116 [2024-12-10 21:37:49.675925] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:49.116 [2024-12-10 21:37:49.676031] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:49.116 [2024-12-10 21:37:49.676084] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:49.116 21:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.116 21:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65719 00:10:49.116 21:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65719 ']' 00:10:49.116 21:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65719 00:10:49.116 21:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:49.116 21:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:49.116 21:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65719 00:10:49.116 21:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:49.116 21:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:49.116 21:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65719' 00:10:49.116 killing process with pid 65719 00:10:49.116 21:37:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 65719 00:10:49.116 [2024-12-10 21:37:49.724305] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:49.116 21:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65719 00:10:49.375 [2024-12-10 21:37:50.042581] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:50.755 21:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:50.755 00:10:50.755 real 0m11.096s 00:10:50.755 user 0m17.623s 00:10:50.755 sys 0m1.925s 00:10:50.755 21:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.755 21:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.755 ************************************ 00:10:50.755 END TEST raid_state_function_test 00:10:50.755 ************************************ 00:10:50.755 21:37:51 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:10:50.755 21:37:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:50.755 21:37:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.755 21:37:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:50.755 ************************************ 00:10:50.755 START TEST raid_state_function_test_sb 00:10:50.755 ************************************ 00:10:50.755 21:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:10:50.755 21:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:50.755 21:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:50.755 21:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:50.755 21:37:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:50.755 21:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:50.755 21:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:50.755 21:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:50.755 21:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:50.755 21:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:50.756 21:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:50.756 21:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:50.756 21:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:50.756 21:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:50.756 21:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:50.756 21:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:50.756 21:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:50.756 21:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:50.756 21:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:50.756 21:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:50.756 21:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:50.756 21:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:50.756 21:37:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:50.756 21:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:50.756 21:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:50.756 21:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:50.756 21:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:50.756 21:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66346 00:10:50.756 21:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:50.756 21:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66346' 00:10:50.756 Process raid pid: 66346 00:10:50.756 21:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66346 00:10:50.756 21:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66346 ']' 00:10:50.756 21:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.756 21:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.756 21:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:50.756 21:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.756 21:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.756 [2024-12-10 21:37:51.385381] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:10:50.756 [2024-12-10 21:37:51.385599] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.014 [2024-12-10 21:37:51.562787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.014 [2024-12-10 21:37:51.695165] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.272 [2024-12-10 21:37:51.922469] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.272 [2024-12-10 21:37:51.922593] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.532 21:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:51.532 21:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:51.532 21:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:51.532 21:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.532 21:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.532 [2024-12-10 21:37:52.236018] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:51.532 [2024-12-10 21:37:52.236157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:51.532 [2024-12-10 
21:37:52.236194] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:51.532 [2024-12-10 21:37:52.236222] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:51.532 [2024-12-10 21:37:52.236260] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:51.532 [2024-12-10 21:37:52.236303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:51.532 21:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.532 21:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:51.532 21:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.532 21:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.532 21:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:51.532 21:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.532 21:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:51.532 21:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.532 21:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.532 21:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.532 21:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.532 21:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.532 21:37:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.532 21:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.532 21:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.532 21:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.532 21:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.532 "name": "Existed_Raid", 00:10:51.532 "uuid": "d6f568d0-72bd-400b-8e8f-234fcaeea4eb", 00:10:51.532 "strip_size_kb": 64, 00:10:51.532 "state": "configuring", 00:10:51.532 "raid_level": "concat", 00:10:51.532 "superblock": true, 00:10:51.532 "num_base_bdevs": 3, 00:10:51.532 "num_base_bdevs_discovered": 0, 00:10:51.532 "num_base_bdevs_operational": 3, 00:10:51.532 "base_bdevs_list": [ 00:10:51.532 { 00:10:51.532 "name": "BaseBdev1", 00:10:51.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.532 "is_configured": false, 00:10:51.532 "data_offset": 0, 00:10:51.532 "data_size": 0 00:10:51.532 }, 00:10:51.532 { 00:10:51.532 "name": "BaseBdev2", 00:10:51.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.532 "is_configured": false, 00:10:51.532 "data_offset": 0, 00:10:51.532 "data_size": 0 00:10:51.532 }, 00:10:51.532 { 00:10:51.532 "name": "BaseBdev3", 00:10:51.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.532 "is_configured": false, 00:10:51.532 "data_offset": 0, 00:10:51.532 "data_size": 0 00:10:51.532 } 00:10:51.532 ] 00:10:51.532 }' 00:10:51.532 21:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.532 21:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.100 21:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:52.100 21:37:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.100 21:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.100 [2024-12-10 21:37:52.687172] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:52.100 [2024-12-10 21:37:52.687263] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:52.100 21:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.100 21:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:52.100 21:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.100 21:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.100 [2024-12-10 21:37:52.699163] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:52.100 [2024-12-10 21:37:52.699251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:52.100 [2024-12-10 21:37:52.699283] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:52.100 [2024-12-10 21:37:52.699331] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:52.100 [2024-12-10 21:37:52.699369] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:52.100 [2024-12-10 21:37:52.699403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:52.100 21:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.100 21:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:52.100 
21:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.100 21:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.100 [2024-12-10 21:37:52.751241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:52.100 BaseBdev1 00:10:52.100 21:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.100 21:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:52.100 21:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:52.100 21:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:52.100 21:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:52.100 21:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:52.100 21:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:52.100 21:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:52.100 21:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.100 21:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.100 21:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.100 21:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:52.100 21:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.100 21:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.100 [ 00:10:52.100 { 
00:10:52.100 "name": "BaseBdev1", 00:10:52.100 "aliases": [ 00:10:52.100 "b5635181-cc61-4ebe-ae77-71c3c16ab9ab" 00:10:52.100 ], 00:10:52.100 "product_name": "Malloc disk", 00:10:52.100 "block_size": 512, 00:10:52.100 "num_blocks": 65536, 00:10:52.100 "uuid": "b5635181-cc61-4ebe-ae77-71c3c16ab9ab", 00:10:52.100 "assigned_rate_limits": { 00:10:52.100 "rw_ios_per_sec": 0, 00:10:52.100 "rw_mbytes_per_sec": 0, 00:10:52.100 "r_mbytes_per_sec": 0, 00:10:52.100 "w_mbytes_per_sec": 0 00:10:52.100 }, 00:10:52.100 "claimed": true, 00:10:52.100 "claim_type": "exclusive_write", 00:10:52.100 "zoned": false, 00:10:52.100 "supported_io_types": { 00:10:52.100 "read": true, 00:10:52.100 "write": true, 00:10:52.100 "unmap": true, 00:10:52.100 "flush": true, 00:10:52.100 "reset": true, 00:10:52.100 "nvme_admin": false, 00:10:52.100 "nvme_io": false, 00:10:52.100 "nvme_io_md": false, 00:10:52.100 "write_zeroes": true, 00:10:52.100 "zcopy": true, 00:10:52.100 "get_zone_info": false, 00:10:52.100 "zone_management": false, 00:10:52.100 "zone_append": false, 00:10:52.100 "compare": false, 00:10:52.100 "compare_and_write": false, 00:10:52.100 "abort": true, 00:10:52.100 "seek_hole": false, 00:10:52.100 "seek_data": false, 00:10:52.100 "copy": true, 00:10:52.100 "nvme_iov_md": false 00:10:52.100 }, 00:10:52.100 "memory_domains": [ 00:10:52.100 { 00:10:52.100 "dma_device_id": "system", 00:10:52.100 "dma_device_type": 1 00:10:52.100 }, 00:10:52.100 { 00:10:52.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.100 "dma_device_type": 2 00:10:52.100 } 00:10:52.100 ], 00:10:52.100 "driver_specific": {} 00:10:52.100 } 00:10:52.100 ] 00:10:52.100 21:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.100 21:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:52.100 21:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:10:52.101 21:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.101 21:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.101 21:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:52.101 21:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.101 21:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:52.101 21:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.101 21:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.101 21:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.101 21:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.101 21:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.101 21:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.101 21:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.101 21:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.101 21:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.101 21:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.101 "name": "Existed_Raid", 00:10:52.101 "uuid": "570d3b38-c8df-463b-b707-cd8935d9a90d", 00:10:52.101 "strip_size_kb": 64, 00:10:52.101 "state": "configuring", 00:10:52.101 "raid_level": "concat", 00:10:52.101 "superblock": true, 00:10:52.101 
"num_base_bdevs": 3, 00:10:52.101 "num_base_bdevs_discovered": 1, 00:10:52.101 "num_base_bdevs_operational": 3, 00:10:52.101 "base_bdevs_list": [ 00:10:52.101 { 00:10:52.101 "name": "BaseBdev1", 00:10:52.101 "uuid": "b5635181-cc61-4ebe-ae77-71c3c16ab9ab", 00:10:52.101 "is_configured": true, 00:10:52.101 "data_offset": 2048, 00:10:52.101 "data_size": 63488 00:10:52.101 }, 00:10:52.101 { 00:10:52.101 "name": "BaseBdev2", 00:10:52.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.101 "is_configured": false, 00:10:52.101 "data_offset": 0, 00:10:52.101 "data_size": 0 00:10:52.101 }, 00:10:52.101 { 00:10:52.101 "name": "BaseBdev3", 00:10:52.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.101 "is_configured": false, 00:10:52.101 "data_offset": 0, 00:10:52.101 "data_size": 0 00:10:52.101 } 00:10:52.101 ] 00:10:52.101 }' 00:10:52.101 21:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.101 21:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.687 21:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:52.687 21:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.687 21:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.687 [2024-12-10 21:37:53.242481] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:52.687 [2024-12-10 21:37:53.242586] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:52.687 21:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.687 21:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:52.687 
21:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.687 21:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.687 [2024-12-10 21:37:53.254504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:52.687 [2024-12-10 21:37:53.256503] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:52.687 [2024-12-10 21:37:53.256581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:52.687 [2024-12-10 21:37:53.256610] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:52.687 [2024-12-10 21:37:53.256633] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:52.687 21:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.687 21:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:52.687 21:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:52.687 21:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:52.687 21:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.687 21:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.687 21:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:52.687 21:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.687 21:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:52.687 21:37:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.687 21:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.687 21:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.687 21:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.687 21:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.687 21:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.687 21:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.687 21:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.687 21:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.687 21:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.687 "name": "Existed_Raid", 00:10:52.687 "uuid": "941bf93d-7fba-4331-b0ea-9e335bbd1187", 00:10:52.687 "strip_size_kb": 64, 00:10:52.687 "state": "configuring", 00:10:52.687 "raid_level": "concat", 00:10:52.687 "superblock": true, 00:10:52.687 "num_base_bdevs": 3, 00:10:52.687 "num_base_bdevs_discovered": 1, 00:10:52.687 "num_base_bdevs_operational": 3, 00:10:52.687 "base_bdevs_list": [ 00:10:52.687 { 00:10:52.687 "name": "BaseBdev1", 00:10:52.687 "uuid": "b5635181-cc61-4ebe-ae77-71c3c16ab9ab", 00:10:52.687 "is_configured": true, 00:10:52.687 "data_offset": 2048, 00:10:52.687 "data_size": 63488 00:10:52.687 }, 00:10:52.687 { 00:10:52.687 "name": "BaseBdev2", 00:10:52.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.687 "is_configured": false, 00:10:52.687 "data_offset": 0, 00:10:52.687 "data_size": 0 00:10:52.687 }, 00:10:52.687 { 00:10:52.687 "name": "BaseBdev3", 00:10:52.687 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:52.687 "is_configured": false, 00:10:52.687 "data_offset": 0, 00:10:52.687 "data_size": 0 00:10:52.687 } 00:10:52.687 ] 00:10:52.687 }' 00:10:52.687 21:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.687 21:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.946 21:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:52.946 21:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.946 21:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.205 [2024-12-10 21:37:53.755746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:53.205 BaseBdev2 00:10:53.205 21:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.205 21:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:53.205 21:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:53.205 21:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:53.205 21:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:53.205 21:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:53.205 21:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:53.205 21:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:53.205 21:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.205 21:37:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:53.205 21:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.205 21:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:53.205 21:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.205 21:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.205 [ 00:10:53.205 { 00:10:53.205 "name": "BaseBdev2", 00:10:53.205 "aliases": [ 00:10:53.205 "07064766-3dd3-4223-8d75-976843dafb05" 00:10:53.205 ], 00:10:53.205 "product_name": "Malloc disk", 00:10:53.205 "block_size": 512, 00:10:53.205 "num_blocks": 65536, 00:10:53.205 "uuid": "07064766-3dd3-4223-8d75-976843dafb05", 00:10:53.205 "assigned_rate_limits": { 00:10:53.205 "rw_ios_per_sec": 0, 00:10:53.205 "rw_mbytes_per_sec": 0, 00:10:53.205 "r_mbytes_per_sec": 0, 00:10:53.205 "w_mbytes_per_sec": 0 00:10:53.205 }, 00:10:53.205 "claimed": true, 00:10:53.205 "claim_type": "exclusive_write", 00:10:53.205 "zoned": false, 00:10:53.205 "supported_io_types": { 00:10:53.205 "read": true, 00:10:53.205 "write": true, 00:10:53.205 "unmap": true, 00:10:53.205 "flush": true, 00:10:53.205 "reset": true, 00:10:53.205 "nvme_admin": false, 00:10:53.205 "nvme_io": false, 00:10:53.205 "nvme_io_md": false, 00:10:53.205 "write_zeroes": true, 00:10:53.205 "zcopy": true, 00:10:53.205 "get_zone_info": false, 00:10:53.205 "zone_management": false, 00:10:53.205 "zone_append": false, 00:10:53.205 "compare": false, 00:10:53.205 "compare_and_write": false, 00:10:53.205 "abort": true, 00:10:53.205 "seek_hole": false, 00:10:53.205 "seek_data": false, 00:10:53.205 "copy": true, 00:10:53.205 "nvme_iov_md": false 00:10:53.205 }, 00:10:53.205 "memory_domains": [ 00:10:53.205 { 00:10:53.205 "dma_device_id": "system", 00:10:53.205 "dma_device_type": 1 00:10:53.205 }, 00:10:53.205 { 00:10:53.205 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.205 "dma_device_type": 2 00:10:53.205 } 00:10:53.205 ], 00:10:53.205 "driver_specific": {} 00:10:53.205 } 00:10:53.205 ] 00:10:53.205 21:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.206 21:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:53.206 21:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:53.206 21:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:53.206 21:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:53.206 21:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.206 21:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.206 21:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:53.206 21:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.206 21:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:53.206 21:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.206 21:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.206 21:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.206 21:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.206 21:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.206 21:37:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.206 21:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.206 21:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.206 21:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.206 21:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.206 "name": "Existed_Raid", 00:10:53.206 "uuid": "941bf93d-7fba-4331-b0ea-9e335bbd1187", 00:10:53.206 "strip_size_kb": 64, 00:10:53.206 "state": "configuring", 00:10:53.206 "raid_level": "concat", 00:10:53.206 "superblock": true, 00:10:53.206 "num_base_bdevs": 3, 00:10:53.206 "num_base_bdevs_discovered": 2, 00:10:53.206 "num_base_bdevs_operational": 3, 00:10:53.206 "base_bdevs_list": [ 00:10:53.206 { 00:10:53.206 "name": "BaseBdev1", 00:10:53.206 "uuid": "b5635181-cc61-4ebe-ae77-71c3c16ab9ab", 00:10:53.206 "is_configured": true, 00:10:53.206 "data_offset": 2048, 00:10:53.206 "data_size": 63488 00:10:53.206 }, 00:10:53.206 { 00:10:53.206 "name": "BaseBdev2", 00:10:53.206 "uuid": "07064766-3dd3-4223-8d75-976843dafb05", 00:10:53.206 "is_configured": true, 00:10:53.206 "data_offset": 2048, 00:10:53.206 "data_size": 63488 00:10:53.206 }, 00:10:53.206 { 00:10:53.206 "name": "BaseBdev3", 00:10:53.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.206 "is_configured": false, 00:10:53.206 "data_offset": 0, 00:10:53.206 "data_size": 0 00:10:53.206 } 00:10:53.206 ] 00:10:53.206 }' 00:10:53.206 21:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.206 21:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.465 21:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:53.465 21:37:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.465 21:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.725 [2024-12-10 21:37:54.289270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:53.725 BaseBdev3 00:10:53.725 [2024-12-10 21:37:54.289674] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:53.725 [2024-12-10 21:37:54.289705] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:53.725 [2024-12-10 21:37:54.290001] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:53.725 [2024-12-10 21:37:54.290176] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:53.725 [2024-12-10 21:37:54.290188] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:53.725 21:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.725 [2024-12-10 21:37:54.290352] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:53.725 21:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:53.725 21:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:53.725 21:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:53.725 21:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:53.725 21:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:53.725 21:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:53.725 21:37:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:53.725 21:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.725 21:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.725 21:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.725 21:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:53.725 21:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.725 21:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.725 [ 00:10:53.725 { 00:10:53.725 "name": "BaseBdev3", 00:10:53.725 "aliases": [ 00:10:53.725 "8ab03012-5ebf-43ba-ab0e-5baeeb17c796" 00:10:53.725 ], 00:10:53.725 "product_name": "Malloc disk", 00:10:53.725 "block_size": 512, 00:10:53.725 "num_blocks": 65536, 00:10:53.725 "uuid": "8ab03012-5ebf-43ba-ab0e-5baeeb17c796", 00:10:53.725 "assigned_rate_limits": { 00:10:53.725 "rw_ios_per_sec": 0, 00:10:53.725 "rw_mbytes_per_sec": 0, 00:10:53.725 "r_mbytes_per_sec": 0, 00:10:53.725 "w_mbytes_per_sec": 0 00:10:53.725 }, 00:10:53.725 "claimed": true, 00:10:53.725 "claim_type": "exclusive_write", 00:10:53.725 "zoned": false, 00:10:53.725 "supported_io_types": { 00:10:53.725 "read": true, 00:10:53.725 "write": true, 00:10:53.725 "unmap": true, 00:10:53.725 "flush": true, 00:10:53.725 "reset": true, 00:10:53.725 "nvme_admin": false, 00:10:53.725 "nvme_io": false, 00:10:53.725 "nvme_io_md": false, 00:10:53.725 "write_zeroes": true, 00:10:53.725 "zcopy": true, 00:10:53.725 "get_zone_info": false, 00:10:53.725 "zone_management": false, 00:10:53.725 "zone_append": false, 00:10:53.725 "compare": false, 00:10:53.725 "compare_and_write": false, 00:10:53.725 "abort": true, 00:10:53.725 "seek_hole": false, 00:10:53.725 "seek_data": false, 
00:10:53.725 "copy": true, 00:10:53.725 "nvme_iov_md": false 00:10:53.725 }, 00:10:53.725 "memory_domains": [ 00:10:53.725 { 00:10:53.725 "dma_device_id": "system", 00:10:53.725 "dma_device_type": 1 00:10:53.725 }, 00:10:53.725 { 00:10:53.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.725 "dma_device_type": 2 00:10:53.725 } 00:10:53.725 ], 00:10:53.725 "driver_specific": {} 00:10:53.725 } 00:10:53.725 ] 00:10:53.725 21:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.725 21:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:53.725 21:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:53.725 21:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:53.725 21:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:53.725 21:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.725 21:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:53.725 21:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:53.725 21:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.725 21:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:53.725 21:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.725 21:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.725 21:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.725 21:37:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.725 21:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.725 21:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.725 21:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.725 21:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.725 21:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.725 21:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.725 "name": "Existed_Raid", 00:10:53.725 "uuid": "941bf93d-7fba-4331-b0ea-9e335bbd1187", 00:10:53.725 "strip_size_kb": 64, 00:10:53.725 "state": "online", 00:10:53.725 "raid_level": "concat", 00:10:53.725 "superblock": true, 00:10:53.725 "num_base_bdevs": 3, 00:10:53.725 "num_base_bdevs_discovered": 3, 00:10:53.725 "num_base_bdevs_operational": 3, 00:10:53.725 "base_bdevs_list": [ 00:10:53.725 { 00:10:53.725 "name": "BaseBdev1", 00:10:53.725 "uuid": "b5635181-cc61-4ebe-ae77-71c3c16ab9ab", 00:10:53.725 "is_configured": true, 00:10:53.725 "data_offset": 2048, 00:10:53.725 "data_size": 63488 00:10:53.725 }, 00:10:53.725 { 00:10:53.725 "name": "BaseBdev2", 00:10:53.725 "uuid": "07064766-3dd3-4223-8d75-976843dafb05", 00:10:53.725 "is_configured": true, 00:10:53.725 "data_offset": 2048, 00:10:53.725 "data_size": 63488 00:10:53.725 }, 00:10:53.725 { 00:10:53.725 "name": "BaseBdev3", 00:10:53.725 "uuid": "8ab03012-5ebf-43ba-ab0e-5baeeb17c796", 00:10:53.725 "is_configured": true, 00:10:53.725 "data_offset": 2048, 00:10:53.725 "data_size": 63488 00:10:53.725 } 00:10:53.725 ] 00:10:53.725 }' 00:10:53.725 21:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.725 21:37:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.995 21:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:53.995 21:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:53.995 21:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:53.995 21:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:53.995 21:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:53.995 21:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:53.995 21:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:53.995 21:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:53.995 21:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.995 21:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.265 [2024-12-10 21:37:54.772893] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:54.265 21:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.265 21:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:54.265 "name": "Existed_Raid", 00:10:54.265 "aliases": [ 00:10:54.265 "941bf93d-7fba-4331-b0ea-9e335bbd1187" 00:10:54.265 ], 00:10:54.265 "product_name": "Raid Volume", 00:10:54.265 "block_size": 512, 00:10:54.265 "num_blocks": 190464, 00:10:54.265 "uuid": "941bf93d-7fba-4331-b0ea-9e335bbd1187", 00:10:54.265 "assigned_rate_limits": { 00:10:54.265 "rw_ios_per_sec": 0, 00:10:54.265 "rw_mbytes_per_sec": 0, 00:10:54.265 
"r_mbytes_per_sec": 0, 00:10:54.265 "w_mbytes_per_sec": 0 00:10:54.265 }, 00:10:54.265 "claimed": false, 00:10:54.265 "zoned": false, 00:10:54.265 "supported_io_types": { 00:10:54.265 "read": true, 00:10:54.265 "write": true, 00:10:54.265 "unmap": true, 00:10:54.265 "flush": true, 00:10:54.265 "reset": true, 00:10:54.265 "nvme_admin": false, 00:10:54.265 "nvme_io": false, 00:10:54.265 "nvme_io_md": false, 00:10:54.265 "write_zeroes": true, 00:10:54.265 "zcopy": false, 00:10:54.265 "get_zone_info": false, 00:10:54.265 "zone_management": false, 00:10:54.265 "zone_append": false, 00:10:54.265 "compare": false, 00:10:54.265 "compare_and_write": false, 00:10:54.265 "abort": false, 00:10:54.265 "seek_hole": false, 00:10:54.265 "seek_data": false, 00:10:54.265 "copy": false, 00:10:54.265 "nvme_iov_md": false 00:10:54.265 }, 00:10:54.265 "memory_domains": [ 00:10:54.265 { 00:10:54.265 "dma_device_id": "system", 00:10:54.265 "dma_device_type": 1 00:10:54.265 }, 00:10:54.265 { 00:10:54.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.265 "dma_device_type": 2 00:10:54.265 }, 00:10:54.265 { 00:10:54.265 "dma_device_id": "system", 00:10:54.265 "dma_device_type": 1 00:10:54.265 }, 00:10:54.265 { 00:10:54.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.265 "dma_device_type": 2 00:10:54.265 }, 00:10:54.265 { 00:10:54.265 "dma_device_id": "system", 00:10:54.265 "dma_device_type": 1 00:10:54.265 }, 00:10:54.265 { 00:10:54.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.265 "dma_device_type": 2 00:10:54.265 } 00:10:54.265 ], 00:10:54.265 "driver_specific": { 00:10:54.265 "raid": { 00:10:54.265 "uuid": "941bf93d-7fba-4331-b0ea-9e335bbd1187", 00:10:54.265 "strip_size_kb": 64, 00:10:54.265 "state": "online", 00:10:54.265 "raid_level": "concat", 00:10:54.265 "superblock": true, 00:10:54.265 "num_base_bdevs": 3, 00:10:54.265 "num_base_bdevs_discovered": 3, 00:10:54.265 "num_base_bdevs_operational": 3, 00:10:54.265 "base_bdevs_list": [ 00:10:54.265 { 00:10:54.265 
"name": "BaseBdev1", 00:10:54.265 "uuid": "b5635181-cc61-4ebe-ae77-71c3c16ab9ab", 00:10:54.265 "is_configured": true, 00:10:54.265 "data_offset": 2048, 00:10:54.265 "data_size": 63488 00:10:54.265 }, 00:10:54.265 { 00:10:54.265 "name": "BaseBdev2", 00:10:54.265 "uuid": "07064766-3dd3-4223-8d75-976843dafb05", 00:10:54.265 "is_configured": true, 00:10:54.265 "data_offset": 2048, 00:10:54.265 "data_size": 63488 00:10:54.265 }, 00:10:54.265 { 00:10:54.265 "name": "BaseBdev3", 00:10:54.265 "uuid": "8ab03012-5ebf-43ba-ab0e-5baeeb17c796", 00:10:54.265 "is_configured": true, 00:10:54.265 "data_offset": 2048, 00:10:54.265 "data_size": 63488 00:10:54.265 } 00:10:54.265 ] 00:10:54.265 } 00:10:54.265 } 00:10:54.265 }' 00:10:54.265 21:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:54.265 21:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:54.265 BaseBdev2 00:10:54.265 BaseBdev3' 00:10:54.265 21:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.265 21:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:54.265 21:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.265 21:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:54.265 21:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.265 21:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.265 21:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.265 21:37:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.265 21:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.265 21:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.265 21:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.265 21:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:54.265 21:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.265 21:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.265 21:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.265 21:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.265 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.265 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.265 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.265 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:54.265 21:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.265 21:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.265 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.265 21:37:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.530 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.530 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.530 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:54.530 21:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.530 21:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.530 [2024-12-10 21:37:55.068141] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:54.530 [2024-12-10 21:37:55.068273] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:54.530 [2024-12-10 21:37:55.068371] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:54.530 21:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.530 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:54.530 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:54.530 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:54.530 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:54.530 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:54.530 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:54.530 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.530 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:10:54.530 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.530 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.530 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:54.530 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.530 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.530 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.530 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.530 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.530 21:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.530 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.530 21:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.530 21:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.530 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.530 "name": "Existed_Raid", 00:10:54.530 "uuid": "941bf93d-7fba-4331-b0ea-9e335bbd1187", 00:10:54.530 "strip_size_kb": 64, 00:10:54.530 "state": "offline", 00:10:54.530 "raid_level": "concat", 00:10:54.530 "superblock": true, 00:10:54.530 "num_base_bdevs": 3, 00:10:54.530 "num_base_bdevs_discovered": 2, 00:10:54.530 "num_base_bdevs_operational": 2, 00:10:54.530 "base_bdevs_list": [ 00:10:54.530 { 00:10:54.531 "name": null, 00:10:54.531 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:54.531 "is_configured": false, 00:10:54.531 "data_offset": 0, 00:10:54.531 "data_size": 63488 00:10:54.531 }, 00:10:54.531 { 00:10:54.531 "name": "BaseBdev2", 00:10:54.531 "uuid": "07064766-3dd3-4223-8d75-976843dafb05", 00:10:54.531 "is_configured": true, 00:10:54.531 "data_offset": 2048, 00:10:54.531 "data_size": 63488 00:10:54.531 }, 00:10:54.531 { 00:10:54.531 "name": "BaseBdev3", 00:10:54.531 "uuid": "8ab03012-5ebf-43ba-ab0e-5baeeb17c796", 00:10:54.531 "is_configured": true, 00:10:54.531 "data_offset": 2048, 00:10:54.531 "data_size": 63488 00:10:54.531 } 00:10:54.531 ] 00:10:54.531 }' 00:10:54.531 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.531 21:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.102 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:55.102 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:55.102 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.102 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:55.102 21:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.102 21:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.102 21:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.102 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:55.102 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:55.102 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:10:55.102 21:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.102 21:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.102 [2024-12-10 21:37:55.637193] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:55.102 21:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.102 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:55.102 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:55.102 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.102 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:55.102 21:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.102 21:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.102 21:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.102 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:55.102 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:55.102 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:55.102 21:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.102 21:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.102 [2024-12-10 21:37:55.803976] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:55.102 [2024-12-10 21:37:55.804086] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:55.362 21:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.362 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:55.362 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:55.362 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.362 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:55.363 21:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.363 21:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.363 21:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.363 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:55.363 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:55.363 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:55.363 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:55.363 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:55.363 21:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:55.363 21:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.363 21:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.363 BaseBdev2 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.363 
21:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.363 [ 00:10:55.363 { 00:10:55.363 "name": "BaseBdev2", 00:10:55.363 "aliases": [ 00:10:55.363 "a6399c98-d93c-4152-8dae-87bf2f3ff8a8" 00:10:55.363 ], 00:10:55.363 "product_name": "Malloc disk", 00:10:55.363 "block_size": 512, 00:10:55.363 "num_blocks": 65536, 00:10:55.363 "uuid": "a6399c98-d93c-4152-8dae-87bf2f3ff8a8", 00:10:55.363 "assigned_rate_limits": { 00:10:55.363 "rw_ios_per_sec": 0, 00:10:55.363 "rw_mbytes_per_sec": 0, 00:10:55.363 "r_mbytes_per_sec": 0, 00:10:55.363 "w_mbytes_per_sec": 0 
00:10:55.363 }, 00:10:55.363 "claimed": false, 00:10:55.363 "zoned": false, 00:10:55.363 "supported_io_types": { 00:10:55.363 "read": true, 00:10:55.363 "write": true, 00:10:55.363 "unmap": true, 00:10:55.363 "flush": true, 00:10:55.363 "reset": true, 00:10:55.363 "nvme_admin": false, 00:10:55.363 "nvme_io": false, 00:10:55.363 "nvme_io_md": false, 00:10:55.363 "write_zeroes": true, 00:10:55.363 "zcopy": true, 00:10:55.363 "get_zone_info": false, 00:10:55.363 "zone_management": false, 00:10:55.363 "zone_append": false, 00:10:55.363 "compare": false, 00:10:55.363 "compare_and_write": false, 00:10:55.363 "abort": true, 00:10:55.363 "seek_hole": false, 00:10:55.363 "seek_data": false, 00:10:55.363 "copy": true, 00:10:55.363 "nvme_iov_md": false 00:10:55.363 }, 00:10:55.363 "memory_domains": [ 00:10:55.363 { 00:10:55.363 "dma_device_id": "system", 00:10:55.363 "dma_device_type": 1 00:10:55.363 }, 00:10:55.363 { 00:10:55.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.363 "dma_device_type": 2 00:10:55.363 } 00:10:55.363 ], 00:10:55.363 "driver_specific": {} 00:10:55.363 } 00:10:55.363 ] 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.363 BaseBdev3 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.363 [ 00:10:55.363 { 00:10:55.363 "name": "BaseBdev3", 00:10:55.363 "aliases": [ 00:10:55.363 "e9dcca02-bffb-4df5-9669-b62b9414f869" 00:10:55.363 ], 00:10:55.363 "product_name": "Malloc disk", 00:10:55.363 "block_size": 512, 00:10:55.363 "num_blocks": 65536, 00:10:55.363 "uuid": "e9dcca02-bffb-4df5-9669-b62b9414f869", 00:10:55.363 "assigned_rate_limits": { 00:10:55.363 "rw_ios_per_sec": 0, 00:10:55.363 "rw_mbytes_per_sec": 0, 
00:10:55.363 "r_mbytes_per_sec": 0, 00:10:55.363 "w_mbytes_per_sec": 0 00:10:55.363 }, 00:10:55.363 "claimed": false, 00:10:55.363 "zoned": false, 00:10:55.363 "supported_io_types": { 00:10:55.363 "read": true, 00:10:55.363 "write": true, 00:10:55.363 "unmap": true, 00:10:55.363 "flush": true, 00:10:55.363 "reset": true, 00:10:55.363 "nvme_admin": false, 00:10:55.363 "nvme_io": false, 00:10:55.363 "nvme_io_md": false, 00:10:55.363 "write_zeroes": true, 00:10:55.363 "zcopy": true, 00:10:55.363 "get_zone_info": false, 00:10:55.363 "zone_management": false, 00:10:55.363 "zone_append": false, 00:10:55.363 "compare": false, 00:10:55.363 "compare_and_write": false, 00:10:55.363 "abort": true, 00:10:55.363 "seek_hole": false, 00:10:55.363 "seek_data": false, 00:10:55.363 "copy": true, 00:10:55.363 "nvme_iov_md": false 00:10:55.363 }, 00:10:55.363 "memory_domains": [ 00:10:55.363 { 00:10:55.363 "dma_device_id": "system", 00:10:55.363 "dma_device_type": 1 00:10:55.363 }, 00:10:55.363 { 00:10:55.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.363 "dma_device_type": 2 00:10:55.363 } 00:10:55.363 ], 00:10:55.363 "driver_specific": {} 00:10:55.363 } 00:10:55.363 ] 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.363 21:37:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:55.363 [2024-12-10 21:37:56.143071] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:55.363 [2024-12-10 21:37:56.143123] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:55.363 [2024-12-10 21:37:56.143150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:55.624 [2024-12-10 21:37:56.145119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:55.624 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.624 21:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:55.624 21:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.624 21:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.624 21:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:55.624 21:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.624 21:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:55.624 21:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.624 21:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.624 21:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.624 21:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.624 21:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.624 21:37:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.624 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.624 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.624 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.624 21:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.624 "name": "Existed_Raid", 00:10:55.624 "uuid": "1d6f569e-b5ba-49d8-844d-8b4c47e8bcdb", 00:10:55.624 "strip_size_kb": 64, 00:10:55.624 "state": "configuring", 00:10:55.624 "raid_level": "concat", 00:10:55.624 "superblock": true, 00:10:55.624 "num_base_bdevs": 3, 00:10:55.624 "num_base_bdevs_discovered": 2, 00:10:55.624 "num_base_bdevs_operational": 3, 00:10:55.624 "base_bdevs_list": [ 00:10:55.624 { 00:10:55.624 "name": "BaseBdev1", 00:10:55.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.624 "is_configured": false, 00:10:55.624 "data_offset": 0, 00:10:55.624 "data_size": 0 00:10:55.624 }, 00:10:55.624 { 00:10:55.624 "name": "BaseBdev2", 00:10:55.624 "uuid": "a6399c98-d93c-4152-8dae-87bf2f3ff8a8", 00:10:55.624 "is_configured": true, 00:10:55.624 "data_offset": 2048, 00:10:55.624 "data_size": 63488 00:10:55.624 }, 00:10:55.624 { 00:10:55.624 "name": "BaseBdev3", 00:10:55.624 "uuid": "e9dcca02-bffb-4df5-9669-b62b9414f869", 00:10:55.624 "is_configured": true, 00:10:55.624 "data_offset": 2048, 00:10:55.624 "data_size": 63488 00:10:55.624 } 00:10:55.624 ] 00:10:55.624 }' 00:10:55.624 21:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.624 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.883 21:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:10:55.884 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.884 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.884 [2024-12-10 21:37:56.606325] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:55.884 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.884 21:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:55.884 21:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.884 21:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.884 21:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:55.884 21:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.884 21:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:55.884 21:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.884 21:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.884 21:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.884 21:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.884 21:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.884 21:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.884 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:55.884 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.884 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.884 21:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.884 "name": "Existed_Raid", 00:10:55.884 "uuid": "1d6f569e-b5ba-49d8-844d-8b4c47e8bcdb", 00:10:55.884 "strip_size_kb": 64, 00:10:55.884 "state": "configuring", 00:10:55.884 "raid_level": "concat", 00:10:55.884 "superblock": true, 00:10:55.884 "num_base_bdevs": 3, 00:10:55.884 "num_base_bdevs_discovered": 1, 00:10:55.884 "num_base_bdevs_operational": 3, 00:10:55.884 "base_bdevs_list": [ 00:10:55.884 { 00:10:55.884 "name": "BaseBdev1", 00:10:55.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.884 "is_configured": false, 00:10:55.884 "data_offset": 0, 00:10:55.884 "data_size": 0 00:10:55.884 }, 00:10:55.884 { 00:10:55.884 "name": null, 00:10:55.884 "uuid": "a6399c98-d93c-4152-8dae-87bf2f3ff8a8", 00:10:55.884 "is_configured": false, 00:10:55.884 "data_offset": 0, 00:10:55.884 "data_size": 63488 00:10:55.884 }, 00:10:55.884 { 00:10:55.884 "name": "BaseBdev3", 00:10:55.884 "uuid": "e9dcca02-bffb-4df5-9669-b62b9414f869", 00:10:55.884 "is_configured": true, 00:10:55.884 "data_offset": 2048, 00:10:55.884 "data_size": 63488 00:10:55.884 } 00:10:55.884 ] 00:10:55.884 }' 00:10:55.884 21:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.884 21:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.454 21:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.454 21:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.454 21:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:10:56.454 21:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.454 21:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.454 21:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:56.454 21:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:56.454 21:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.454 21:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.454 [2024-12-10 21:37:57.168820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:56.454 BaseBdev1 00:10:56.454 21:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.454 21:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:56.454 21:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:56.454 21:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:56.454 21:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:56.454 21:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:56.454 21:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:56.454 21:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:56.454 21:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.454 21:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.454 21:37:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.454 21:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:56.454 21:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.454 21:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.454 [ 00:10:56.454 { 00:10:56.454 "name": "BaseBdev1", 00:10:56.454 "aliases": [ 00:10:56.454 "80495099-b0cf-4a29-aa11-57ca1ad58ec8" 00:10:56.454 ], 00:10:56.454 "product_name": "Malloc disk", 00:10:56.454 "block_size": 512, 00:10:56.454 "num_blocks": 65536, 00:10:56.454 "uuid": "80495099-b0cf-4a29-aa11-57ca1ad58ec8", 00:10:56.454 "assigned_rate_limits": { 00:10:56.454 "rw_ios_per_sec": 0, 00:10:56.454 "rw_mbytes_per_sec": 0, 00:10:56.454 "r_mbytes_per_sec": 0, 00:10:56.454 "w_mbytes_per_sec": 0 00:10:56.454 }, 00:10:56.454 "claimed": true, 00:10:56.454 "claim_type": "exclusive_write", 00:10:56.454 "zoned": false, 00:10:56.454 "supported_io_types": { 00:10:56.454 "read": true, 00:10:56.454 "write": true, 00:10:56.454 "unmap": true, 00:10:56.454 "flush": true, 00:10:56.454 "reset": true, 00:10:56.454 "nvme_admin": false, 00:10:56.454 "nvme_io": false, 00:10:56.454 "nvme_io_md": false, 00:10:56.454 "write_zeroes": true, 00:10:56.454 "zcopy": true, 00:10:56.454 "get_zone_info": false, 00:10:56.454 "zone_management": false, 00:10:56.454 "zone_append": false, 00:10:56.454 "compare": false, 00:10:56.454 "compare_and_write": false, 00:10:56.454 "abort": true, 00:10:56.454 "seek_hole": false, 00:10:56.454 "seek_data": false, 00:10:56.454 "copy": true, 00:10:56.454 "nvme_iov_md": false 00:10:56.454 }, 00:10:56.454 "memory_domains": [ 00:10:56.454 { 00:10:56.454 "dma_device_id": "system", 00:10:56.454 "dma_device_type": 1 00:10:56.454 }, 00:10:56.454 { 00:10:56.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.454 
"dma_device_type": 2 00:10:56.454 } 00:10:56.454 ], 00:10:56.454 "driver_specific": {} 00:10:56.454 } 00:10:56.454 ] 00:10:56.454 21:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.454 21:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:56.454 21:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:56.454 21:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.454 21:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.454 21:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:56.454 21:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.454 21:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:56.454 21:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.454 21:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.454 21:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.454 21:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.454 21:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.454 21:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.454 21:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.454 21:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:10:56.454 21:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.713 21:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.713 "name": "Existed_Raid", 00:10:56.713 "uuid": "1d6f569e-b5ba-49d8-844d-8b4c47e8bcdb", 00:10:56.713 "strip_size_kb": 64, 00:10:56.713 "state": "configuring", 00:10:56.713 "raid_level": "concat", 00:10:56.713 "superblock": true, 00:10:56.713 "num_base_bdevs": 3, 00:10:56.713 "num_base_bdevs_discovered": 2, 00:10:56.713 "num_base_bdevs_operational": 3, 00:10:56.713 "base_bdevs_list": [ 00:10:56.713 { 00:10:56.713 "name": "BaseBdev1", 00:10:56.713 "uuid": "80495099-b0cf-4a29-aa11-57ca1ad58ec8", 00:10:56.713 "is_configured": true, 00:10:56.713 "data_offset": 2048, 00:10:56.713 "data_size": 63488 00:10:56.713 }, 00:10:56.713 { 00:10:56.713 "name": null, 00:10:56.713 "uuid": "a6399c98-d93c-4152-8dae-87bf2f3ff8a8", 00:10:56.713 "is_configured": false, 00:10:56.713 "data_offset": 0, 00:10:56.713 "data_size": 63488 00:10:56.713 }, 00:10:56.713 { 00:10:56.713 "name": "BaseBdev3", 00:10:56.713 "uuid": "e9dcca02-bffb-4df5-9669-b62b9414f869", 00:10:56.713 "is_configured": true, 00:10:56.713 "data_offset": 2048, 00:10:56.713 "data_size": 63488 00:10:56.713 } 00:10:56.713 ] 00:10:56.713 }' 00:10:56.713 21:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.713 21:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.971 21:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.971 21:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.971 21:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:56.971 21:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:56.972 21:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.972 21:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:56.972 21:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:56.972 21:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.972 21:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.972 [2024-12-10 21:37:57.664067] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:56.972 21:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.972 21:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:56.972 21:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.972 21:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.972 21:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:56.972 21:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.972 21:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:56.972 21:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.972 21:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.972 21:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.972 21:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.972 
21:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.972 21:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.972 21:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.972 21:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.972 21:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.972 21:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.972 "name": "Existed_Raid", 00:10:56.972 "uuid": "1d6f569e-b5ba-49d8-844d-8b4c47e8bcdb", 00:10:56.972 "strip_size_kb": 64, 00:10:56.972 "state": "configuring", 00:10:56.972 "raid_level": "concat", 00:10:56.972 "superblock": true, 00:10:56.972 "num_base_bdevs": 3, 00:10:56.972 "num_base_bdevs_discovered": 1, 00:10:56.972 "num_base_bdevs_operational": 3, 00:10:56.972 "base_bdevs_list": [ 00:10:56.972 { 00:10:56.972 "name": "BaseBdev1", 00:10:56.972 "uuid": "80495099-b0cf-4a29-aa11-57ca1ad58ec8", 00:10:56.972 "is_configured": true, 00:10:56.972 "data_offset": 2048, 00:10:56.972 "data_size": 63488 00:10:56.972 }, 00:10:56.972 { 00:10:56.972 "name": null, 00:10:56.972 "uuid": "a6399c98-d93c-4152-8dae-87bf2f3ff8a8", 00:10:56.972 "is_configured": false, 00:10:56.972 "data_offset": 0, 00:10:56.972 "data_size": 63488 00:10:56.972 }, 00:10:56.972 { 00:10:56.972 "name": null, 00:10:56.972 "uuid": "e9dcca02-bffb-4df5-9669-b62b9414f869", 00:10:56.972 "is_configured": false, 00:10:56.972 "data_offset": 0, 00:10:56.972 "data_size": 63488 00:10:56.972 } 00:10:56.972 ] 00:10:56.972 }' 00:10:56.972 21:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.972 21:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.541 
21:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.541 21:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.541 21:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:57.541 21:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.541 21:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.541 21:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:57.541 21:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:57.541 21:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.541 21:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.541 [2024-12-10 21:37:58.179271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:57.541 21:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.541 21:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:57.541 21:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.541 21:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.541 21:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:57.541 21:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.541 21:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:10:57.541 21:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.541 21:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.541 21:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.541 21:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.541 21:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.541 21:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.541 21:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.541 21:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.541 21:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.541 21:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.541 "name": "Existed_Raid", 00:10:57.541 "uuid": "1d6f569e-b5ba-49d8-844d-8b4c47e8bcdb", 00:10:57.541 "strip_size_kb": 64, 00:10:57.541 "state": "configuring", 00:10:57.541 "raid_level": "concat", 00:10:57.541 "superblock": true, 00:10:57.541 "num_base_bdevs": 3, 00:10:57.541 "num_base_bdevs_discovered": 2, 00:10:57.541 "num_base_bdevs_operational": 3, 00:10:57.541 "base_bdevs_list": [ 00:10:57.541 { 00:10:57.541 "name": "BaseBdev1", 00:10:57.541 "uuid": "80495099-b0cf-4a29-aa11-57ca1ad58ec8", 00:10:57.541 "is_configured": true, 00:10:57.541 "data_offset": 2048, 00:10:57.541 "data_size": 63488 00:10:57.541 }, 00:10:57.541 { 00:10:57.541 "name": null, 00:10:57.541 "uuid": "a6399c98-d93c-4152-8dae-87bf2f3ff8a8", 00:10:57.541 "is_configured": false, 00:10:57.541 "data_offset": 0, 00:10:57.541 "data_size": 
63488 00:10:57.541 }, 00:10:57.541 { 00:10:57.541 "name": "BaseBdev3", 00:10:57.541 "uuid": "e9dcca02-bffb-4df5-9669-b62b9414f869", 00:10:57.541 "is_configured": true, 00:10:57.541 "data_offset": 2048, 00:10:57.541 "data_size": 63488 00:10:57.541 } 00:10:57.541 ] 00:10:57.541 }' 00:10:57.541 21:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.541 21:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.111 21:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:58.111 21:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.111 21:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.111 21:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.111 21:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.111 21:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:58.111 21:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:58.111 21:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.111 21:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.111 [2024-12-10 21:37:58.654470] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:58.111 21:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.111 21:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:58.111 21:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:58.111 21:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.111 21:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:58.111 21:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.111 21:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:58.111 21:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.111 21:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.111 21:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.111 21:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.111 21:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.111 21:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.111 21:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.111 21:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.111 21:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.111 21:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.111 "name": "Existed_Raid", 00:10:58.111 "uuid": "1d6f569e-b5ba-49d8-844d-8b4c47e8bcdb", 00:10:58.111 "strip_size_kb": 64, 00:10:58.111 "state": "configuring", 00:10:58.111 "raid_level": "concat", 00:10:58.111 "superblock": true, 00:10:58.111 "num_base_bdevs": 3, 00:10:58.111 "num_base_bdevs_discovered": 1, 00:10:58.111 "num_base_bdevs_operational": 
3, 00:10:58.111 "base_bdevs_list": [ 00:10:58.111 { 00:10:58.111 "name": null, 00:10:58.111 "uuid": "80495099-b0cf-4a29-aa11-57ca1ad58ec8", 00:10:58.111 "is_configured": false, 00:10:58.111 "data_offset": 0, 00:10:58.111 "data_size": 63488 00:10:58.111 }, 00:10:58.111 { 00:10:58.111 "name": null, 00:10:58.111 "uuid": "a6399c98-d93c-4152-8dae-87bf2f3ff8a8", 00:10:58.111 "is_configured": false, 00:10:58.111 "data_offset": 0, 00:10:58.111 "data_size": 63488 00:10:58.111 }, 00:10:58.111 { 00:10:58.111 "name": "BaseBdev3", 00:10:58.111 "uuid": "e9dcca02-bffb-4df5-9669-b62b9414f869", 00:10:58.111 "is_configured": true, 00:10:58.111 "data_offset": 2048, 00:10:58.111 "data_size": 63488 00:10:58.111 } 00:10:58.111 ] 00:10:58.111 }' 00:10:58.111 21:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.111 21:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.692 21:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.692 21:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.692 21:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.692 21:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:58.692 21:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.692 21:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:58.692 21:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:58.692 21:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.692 21:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:58.692 [2024-12-10 21:37:59.272105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:58.692 21:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.692 21:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:58.692 21:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.692 21:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.692 21:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:58.692 21:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.692 21:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:58.692 21:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.692 21:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.692 21:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.692 21:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.692 21:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.692 21:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.692 21:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.692 21:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.692 21:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:10:58.692 21:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.692 "name": "Existed_Raid", 00:10:58.692 "uuid": "1d6f569e-b5ba-49d8-844d-8b4c47e8bcdb", 00:10:58.692 "strip_size_kb": 64, 00:10:58.692 "state": "configuring", 00:10:58.692 "raid_level": "concat", 00:10:58.692 "superblock": true, 00:10:58.692 "num_base_bdevs": 3, 00:10:58.692 "num_base_bdevs_discovered": 2, 00:10:58.692 "num_base_bdevs_operational": 3, 00:10:58.692 "base_bdevs_list": [ 00:10:58.692 { 00:10:58.692 "name": null, 00:10:58.692 "uuid": "80495099-b0cf-4a29-aa11-57ca1ad58ec8", 00:10:58.692 "is_configured": false, 00:10:58.692 "data_offset": 0, 00:10:58.692 "data_size": 63488 00:10:58.692 }, 00:10:58.692 { 00:10:58.692 "name": "BaseBdev2", 00:10:58.692 "uuid": "a6399c98-d93c-4152-8dae-87bf2f3ff8a8", 00:10:58.692 "is_configured": true, 00:10:58.692 "data_offset": 2048, 00:10:58.692 "data_size": 63488 00:10:58.692 }, 00:10:58.692 { 00:10:58.692 "name": "BaseBdev3", 00:10:58.692 "uuid": "e9dcca02-bffb-4df5-9669-b62b9414f869", 00:10:58.692 "is_configured": true, 00:10:58.692 "data_offset": 2048, 00:10:58.692 "data_size": 63488 00:10:58.692 } 00:10:58.692 ] 00:10:58.692 }' 00:10:58.692 21:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.692 21:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.261 21:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.261 21:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:59.261 21:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.261 21:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.261 21:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:59.261 21:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:59.261 21:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.261 21:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:59.261 21:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.261 21:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.261 21:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.261 21:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 80495099-b0cf-4a29-aa11-57ca1ad58ec8 00:10:59.261 21:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.261 21:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.261 [2024-12-10 21:37:59.876048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:59.261 [2024-12-10 21:37:59.876331] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:59.261 [2024-12-10 21:37:59.876349] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:59.261 [2024-12-10 21:37:59.876686] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:59.261 [2024-12-10 21:37:59.876868] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:59.261 [2024-12-10 21:37:59.876881] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:59.261 [2024-12-10 21:37:59.877049] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:10:59.261 NewBaseBdev 00:10:59.261 21:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.261 21:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:59.261 21:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:59.261 21:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:59.262 21:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:59.262 21:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:59.262 21:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:59.262 21:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:59.262 21:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.262 21:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.262 21:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.262 21:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:59.262 21:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.262 21:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.262 [ 00:10:59.262 { 00:10:59.262 "name": "NewBaseBdev", 00:10:59.262 "aliases": [ 00:10:59.262 "80495099-b0cf-4a29-aa11-57ca1ad58ec8" 00:10:59.262 ], 00:10:59.262 "product_name": "Malloc disk", 00:10:59.262 "block_size": 512, 00:10:59.262 "num_blocks": 65536, 00:10:59.262 "uuid": "80495099-b0cf-4a29-aa11-57ca1ad58ec8", 
00:10:59.262 "assigned_rate_limits": { 00:10:59.262 "rw_ios_per_sec": 0, 00:10:59.262 "rw_mbytes_per_sec": 0, 00:10:59.262 "r_mbytes_per_sec": 0, 00:10:59.262 "w_mbytes_per_sec": 0 00:10:59.262 }, 00:10:59.262 "claimed": true, 00:10:59.262 "claim_type": "exclusive_write", 00:10:59.262 "zoned": false, 00:10:59.262 "supported_io_types": { 00:10:59.262 "read": true, 00:10:59.262 "write": true, 00:10:59.262 "unmap": true, 00:10:59.262 "flush": true, 00:10:59.262 "reset": true, 00:10:59.262 "nvme_admin": false, 00:10:59.262 "nvme_io": false, 00:10:59.262 "nvme_io_md": false, 00:10:59.262 "write_zeroes": true, 00:10:59.262 "zcopy": true, 00:10:59.262 "get_zone_info": false, 00:10:59.262 "zone_management": false, 00:10:59.262 "zone_append": false, 00:10:59.262 "compare": false, 00:10:59.262 "compare_and_write": false, 00:10:59.262 "abort": true, 00:10:59.262 "seek_hole": false, 00:10:59.262 "seek_data": false, 00:10:59.262 "copy": true, 00:10:59.262 "nvme_iov_md": false 00:10:59.262 }, 00:10:59.262 "memory_domains": [ 00:10:59.262 { 00:10:59.262 "dma_device_id": "system", 00:10:59.262 "dma_device_type": 1 00:10:59.262 }, 00:10:59.262 { 00:10:59.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.262 "dma_device_type": 2 00:10:59.262 } 00:10:59.262 ], 00:10:59.262 "driver_specific": {} 00:10:59.262 } 00:10:59.262 ] 00:10:59.262 21:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.262 21:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:59.262 21:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:59.262 21:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.262 21:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:59.262 21:37:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:59.262 21:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.262 21:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:59.262 21:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.262 21:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.262 21:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.262 21:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.262 21:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.262 21:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.262 21:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.262 21:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.262 21:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.262 21:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.262 "name": "Existed_Raid", 00:10:59.262 "uuid": "1d6f569e-b5ba-49d8-844d-8b4c47e8bcdb", 00:10:59.262 "strip_size_kb": 64, 00:10:59.262 "state": "online", 00:10:59.262 "raid_level": "concat", 00:10:59.262 "superblock": true, 00:10:59.262 "num_base_bdevs": 3, 00:10:59.262 "num_base_bdevs_discovered": 3, 00:10:59.262 "num_base_bdevs_operational": 3, 00:10:59.262 "base_bdevs_list": [ 00:10:59.262 { 00:10:59.262 "name": "NewBaseBdev", 00:10:59.262 "uuid": "80495099-b0cf-4a29-aa11-57ca1ad58ec8", 00:10:59.262 "is_configured": true, 00:10:59.262 "data_offset": 2048, 
00:10:59.262 "data_size": 63488 00:10:59.262 }, 00:10:59.262 { 00:10:59.262 "name": "BaseBdev2", 00:10:59.262 "uuid": "a6399c98-d93c-4152-8dae-87bf2f3ff8a8", 00:10:59.262 "is_configured": true, 00:10:59.262 "data_offset": 2048, 00:10:59.262 "data_size": 63488 00:10:59.262 }, 00:10:59.262 { 00:10:59.262 "name": "BaseBdev3", 00:10:59.262 "uuid": "e9dcca02-bffb-4df5-9669-b62b9414f869", 00:10:59.262 "is_configured": true, 00:10:59.262 "data_offset": 2048, 00:10:59.262 "data_size": 63488 00:10:59.262 } 00:10:59.262 ] 00:10:59.262 }' 00:10:59.262 21:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.262 21:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.831 21:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:59.831 21:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:59.831 21:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:59.831 21:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:59.831 21:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:59.831 21:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:59.831 21:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:59.831 21:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.831 21:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.831 21:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:59.831 [2024-12-10 21:38:00.355702] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:10:59.831 21:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.831 21:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:59.831 "name": "Existed_Raid", 00:10:59.831 "aliases": [ 00:10:59.831 "1d6f569e-b5ba-49d8-844d-8b4c47e8bcdb" 00:10:59.831 ], 00:10:59.831 "product_name": "Raid Volume", 00:10:59.831 "block_size": 512, 00:10:59.831 "num_blocks": 190464, 00:10:59.831 "uuid": "1d6f569e-b5ba-49d8-844d-8b4c47e8bcdb", 00:10:59.831 "assigned_rate_limits": { 00:10:59.831 "rw_ios_per_sec": 0, 00:10:59.831 "rw_mbytes_per_sec": 0, 00:10:59.831 "r_mbytes_per_sec": 0, 00:10:59.831 "w_mbytes_per_sec": 0 00:10:59.831 }, 00:10:59.831 "claimed": false, 00:10:59.831 "zoned": false, 00:10:59.831 "supported_io_types": { 00:10:59.831 "read": true, 00:10:59.831 "write": true, 00:10:59.831 "unmap": true, 00:10:59.831 "flush": true, 00:10:59.831 "reset": true, 00:10:59.831 "nvme_admin": false, 00:10:59.831 "nvme_io": false, 00:10:59.831 "nvme_io_md": false, 00:10:59.831 "write_zeroes": true, 00:10:59.831 "zcopy": false, 00:10:59.831 "get_zone_info": false, 00:10:59.831 "zone_management": false, 00:10:59.831 "zone_append": false, 00:10:59.831 "compare": false, 00:10:59.831 "compare_and_write": false, 00:10:59.831 "abort": false, 00:10:59.831 "seek_hole": false, 00:10:59.831 "seek_data": false, 00:10:59.831 "copy": false, 00:10:59.831 "nvme_iov_md": false 00:10:59.831 }, 00:10:59.831 "memory_domains": [ 00:10:59.831 { 00:10:59.831 "dma_device_id": "system", 00:10:59.831 "dma_device_type": 1 00:10:59.831 }, 00:10:59.831 { 00:10:59.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.831 "dma_device_type": 2 00:10:59.831 }, 00:10:59.831 { 00:10:59.831 "dma_device_id": "system", 00:10:59.831 "dma_device_type": 1 00:10:59.831 }, 00:10:59.831 { 00:10:59.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.831 "dma_device_type": 2 00:10:59.831 }, 00:10:59.831 { 
00:10:59.831 "dma_device_id": "system", 00:10:59.831 "dma_device_type": 1 00:10:59.831 }, 00:10:59.831 { 00:10:59.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.831 "dma_device_type": 2 00:10:59.831 } 00:10:59.831 ], 00:10:59.831 "driver_specific": { 00:10:59.831 "raid": { 00:10:59.831 "uuid": "1d6f569e-b5ba-49d8-844d-8b4c47e8bcdb", 00:10:59.831 "strip_size_kb": 64, 00:10:59.831 "state": "online", 00:10:59.831 "raid_level": "concat", 00:10:59.831 "superblock": true, 00:10:59.831 "num_base_bdevs": 3, 00:10:59.831 "num_base_bdevs_discovered": 3, 00:10:59.831 "num_base_bdevs_operational": 3, 00:10:59.831 "base_bdevs_list": [ 00:10:59.831 { 00:10:59.831 "name": "NewBaseBdev", 00:10:59.831 "uuid": "80495099-b0cf-4a29-aa11-57ca1ad58ec8", 00:10:59.831 "is_configured": true, 00:10:59.831 "data_offset": 2048, 00:10:59.831 "data_size": 63488 00:10:59.831 }, 00:10:59.831 { 00:10:59.831 "name": "BaseBdev2", 00:10:59.831 "uuid": "a6399c98-d93c-4152-8dae-87bf2f3ff8a8", 00:10:59.831 "is_configured": true, 00:10:59.831 "data_offset": 2048, 00:10:59.831 "data_size": 63488 00:10:59.831 }, 00:10:59.831 { 00:10:59.831 "name": "BaseBdev3", 00:10:59.831 "uuid": "e9dcca02-bffb-4df5-9669-b62b9414f869", 00:10:59.831 "is_configured": true, 00:10:59.831 "data_offset": 2048, 00:10:59.831 "data_size": 63488 00:10:59.832 } 00:10:59.832 ] 00:10:59.832 } 00:10:59.832 } 00:10:59.832 }' 00:10:59.832 21:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:59.832 21:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:59.832 BaseBdev2 00:10:59.832 BaseBdev3' 00:10:59.832 21:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.832 21:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 
00:10:59.832 21:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.832 21:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.832 21:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:59.832 21:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.832 21:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.832 21:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.832 21:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.832 21:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.832 21:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.832 21:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:59.832 21:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.832 21:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.832 21:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.832 21:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.832 21:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.832 21:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.832 21:38:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.832 21:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:59.832 21:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.832 21:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.832 21:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.832 21:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.832 21:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.832 21:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.832 21:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:59.832 21:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.832 21:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.832 [2024-12-10 21:38:00.610890] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:59.832 [2024-12-10 21:38:00.610924] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:59.832 [2024-12-10 21:38:00.611019] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:59.832 [2024-12-10 21:38:00.611075] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:59.832 [2024-12-10 21:38:00.611087] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:00.091 21:38:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.091 21:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66346 00:11:00.091 21:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66346 ']' 00:11:00.091 21:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66346 00:11:00.091 21:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:00.091 21:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:00.091 21:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66346 00:11:00.091 killing process with pid 66346 00:11:00.091 21:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:00.091 21:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:00.091 21:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66346' 00:11:00.091 21:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66346 00:11:00.091 [2024-12-10 21:38:00.658882] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:00.091 21:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66346 00:11:00.350 [2024-12-10 21:38:00.992464] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:01.727 21:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:01.727 00:11:01.727 real 0m10.927s 00:11:01.727 user 0m17.338s 00:11:01.727 sys 0m1.861s 00:11:01.727 21:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.727 21:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:11:01.727 ************************************ 00:11:01.727 END TEST raid_state_function_test_sb 00:11:01.727 ************************************ 00:11:01.727 21:38:02 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:11:01.727 21:38:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:01.727 21:38:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.727 21:38:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:01.727 ************************************ 00:11:01.727 START TEST raid_superblock_test 00:11:01.727 ************************************ 00:11:01.727 21:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:11:01.727 21:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:01.727 21:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:11:01.727 21:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:01.727 21:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:01.727 21:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:01.727 21:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:01.727 21:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:01.727 21:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:01.727 21:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:01.727 21:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:01.727 21:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:01.727 21:38:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:01.727 21:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:01.727 21:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:01.727 21:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:01.727 21:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:01.727 21:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66972 00:11:01.727 21:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:01.727 21:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66972 00:11:01.727 21:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66972 ']' 00:11:01.727 21:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.727 21:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.727 21:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.727 21:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.727 21:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.727 [2024-12-10 21:38:02.375195] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:11:01.727 [2024-12-10 21:38:02.375327] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66972 ] 00:11:01.985 [2024-12-10 21:38:02.553935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.985 [2024-12-10 21:38:02.680567] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.243 [2024-12-10 21:38:02.897886] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:02.243 [2024-12-10 21:38:02.897927] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:02.502 21:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.502 21:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:02.502 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:02.502 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:02.502 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:02.502 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:02.502 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:02.502 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:02.502 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:02.502 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:02.502 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:02.502 
21:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.502 21:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.761 malloc1 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.761 [2024-12-10 21:38:03.307144] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:02.761 [2024-12-10 21:38:03.307233] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.761 [2024-12-10 21:38:03.307257] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:02.761 [2024-12-10 21:38:03.307267] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.761 [2024-12-10 21:38:03.309576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.761 [2024-12-10 21:38:03.309616] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:02.761 pt1 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.761 malloc2 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.761 [2024-12-10 21:38:03.365091] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:02.761 [2024-12-10 21:38:03.365156] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.761 [2024-12-10 21:38:03.365200] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:02.761 [2024-12-10 21:38:03.365211] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.761 [2024-12-10 21:38:03.367682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.761 [2024-12-10 21:38:03.367722] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:02.761 
pt2 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.761 malloc3 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.761 [2024-12-10 21:38:03.435661] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:02.761 [2024-12-10 21:38:03.435743] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.761 [2024-12-10 21:38:03.435769] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:02.761 [2024-12-10 21:38:03.435779] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.761 [2024-12-10 21:38:03.438043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.761 [2024-12-10 21:38:03.438085] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:02.761 pt3 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.761 [2024-12-10 21:38:03.447663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:02.761 [2024-12-10 21:38:03.449619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:02.761 [2024-12-10 21:38:03.449692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:02.761 [2024-12-10 21:38:03.449861] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:02.761 [2024-12-10 21:38:03.449876] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:02.761 [2024-12-10 21:38:03.450183] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:11:02.761 [2024-12-10 21:38:03.450374] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:02.761 [2024-12-10 21:38:03.450391] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:02.761 [2024-12-10 21:38:03.450600] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.761 21:38:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.761 "name": "raid_bdev1", 00:11:02.761 "uuid": "231c6dbf-b838-41da-8e18-329c3b837941", 00:11:02.761 "strip_size_kb": 64, 00:11:02.761 "state": "online", 00:11:02.761 "raid_level": "concat", 00:11:02.761 "superblock": true, 00:11:02.761 "num_base_bdevs": 3, 00:11:02.761 "num_base_bdevs_discovered": 3, 00:11:02.761 "num_base_bdevs_operational": 3, 00:11:02.761 "base_bdevs_list": [ 00:11:02.761 { 00:11:02.761 "name": "pt1", 00:11:02.761 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:02.761 "is_configured": true, 00:11:02.761 "data_offset": 2048, 00:11:02.761 "data_size": 63488 00:11:02.761 }, 00:11:02.761 { 00:11:02.761 "name": "pt2", 00:11:02.761 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:02.761 "is_configured": true, 00:11:02.761 "data_offset": 2048, 00:11:02.761 "data_size": 63488 00:11:02.761 }, 00:11:02.761 { 00:11:02.761 "name": "pt3", 00:11:02.761 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:02.761 "is_configured": true, 00:11:02.761 "data_offset": 2048, 00:11:02.761 "data_size": 63488 00:11:02.761 } 00:11:02.761 ] 00:11:02.761 }' 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.761 21:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.329 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:03.329 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:03.329 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:03.329 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:11:03.329 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:03.329 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:03.329 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:03.329 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:03.329 21:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.329 21:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.329 [2024-12-10 21:38:03.863266] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:03.329 21:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.329 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:03.329 "name": "raid_bdev1", 00:11:03.329 "aliases": [ 00:11:03.329 "231c6dbf-b838-41da-8e18-329c3b837941" 00:11:03.329 ], 00:11:03.329 "product_name": "Raid Volume", 00:11:03.329 "block_size": 512, 00:11:03.329 "num_blocks": 190464, 00:11:03.329 "uuid": "231c6dbf-b838-41da-8e18-329c3b837941", 00:11:03.329 "assigned_rate_limits": { 00:11:03.329 "rw_ios_per_sec": 0, 00:11:03.329 "rw_mbytes_per_sec": 0, 00:11:03.329 "r_mbytes_per_sec": 0, 00:11:03.329 "w_mbytes_per_sec": 0 00:11:03.329 }, 00:11:03.329 "claimed": false, 00:11:03.329 "zoned": false, 00:11:03.329 "supported_io_types": { 00:11:03.329 "read": true, 00:11:03.329 "write": true, 00:11:03.329 "unmap": true, 00:11:03.329 "flush": true, 00:11:03.329 "reset": true, 00:11:03.329 "nvme_admin": false, 00:11:03.329 "nvme_io": false, 00:11:03.329 "nvme_io_md": false, 00:11:03.329 "write_zeroes": true, 00:11:03.329 "zcopy": false, 00:11:03.329 "get_zone_info": false, 00:11:03.329 "zone_management": false, 00:11:03.329 "zone_append": false, 00:11:03.329 "compare": 
false, 00:11:03.329 "compare_and_write": false, 00:11:03.329 "abort": false, 00:11:03.329 "seek_hole": false, 00:11:03.329 "seek_data": false, 00:11:03.329 "copy": false, 00:11:03.329 "nvme_iov_md": false 00:11:03.329 }, 00:11:03.329 "memory_domains": [ 00:11:03.329 { 00:11:03.329 "dma_device_id": "system", 00:11:03.329 "dma_device_type": 1 00:11:03.329 }, 00:11:03.329 { 00:11:03.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.329 "dma_device_type": 2 00:11:03.329 }, 00:11:03.329 { 00:11:03.329 "dma_device_id": "system", 00:11:03.330 "dma_device_type": 1 00:11:03.330 }, 00:11:03.330 { 00:11:03.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.330 "dma_device_type": 2 00:11:03.330 }, 00:11:03.330 { 00:11:03.330 "dma_device_id": "system", 00:11:03.330 "dma_device_type": 1 00:11:03.330 }, 00:11:03.330 { 00:11:03.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.330 "dma_device_type": 2 00:11:03.330 } 00:11:03.330 ], 00:11:03.330 "driver_specific": { 00:11:03.330 "raid": { 00:11:03.330 "uuid": "231c6dbf-b838-41da-8e18-329c3b837941", 00:11:03.330 "strip_size_kb": 64, 00:11:03.330 "state": "online", 00:11:03.330 "raid_level": "concat", 00:11:03.330 "superblock": true, 00:11:03.330 "num_base_bdevs": 3, 00:11:03.330 "num_base_bdevs_discovered": 3, 00:11:03.330 "num_base_bdevs_operational": 3, 00:11:03.330 "base_bdevs_list": [ 00:11:03.330 { 00:11:03.330 "name": "pt1", 00:11:03.330 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:03.330 "is_configured": true, 00:11:03.330 "data_offset": 2048, 00:11:03.330 "data_size": 63488 00:11:03.330 }, 00:11:03.330 { 00:11:03.330 "name": "pt2", 00:11:03.330 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:03.330 "is_configured": true, 00:11:03.330 "data_offset": 2048, 00:11:03.330 "data_size": 63488 00:11:03.330 }, 00:11:03.330 { 00:11:03.330 "name": "pt3", 00:11:03.330 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:03.330 "is_configured": true, 00:11:03.330 "data_offset": 2048, 00:11:03.330 
"data_size": 63488 00:11:03.330 } 00:11:03.330 ] 00:11:03.330 } 00:11:03.330 } 00:11:03.330 }' 00:11:03.330 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:03.330 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:03.330 pt2 00:11:03.330 pt3' 00:11:03.330 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.330 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:03.330 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.330 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:03.330 21:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.330 21:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.330 21:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.330 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.330 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.330 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.330 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.330 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.330 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:03.330 21:38:04 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.330 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.330 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.330 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.330 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.330 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.330 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:03.330 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.330 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.330 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.588 [2024-12-10 21:38:04.150798] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:03.588 21:38:04 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=231c6dbf-b838-41da-8e18-329c3b837941 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 231c6dbf-b838-41da-8e18-329c3b837941 ']' 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.588 [2024-12-10 21:38:04.198432] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:03.588 [2024-12-10 21:38:04.198524] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:03.588 [2024-12-10 21:38:04.198641] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:03.588 [2024-12-10 21:38:04.198724] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:03.588 [2024-12-10 21:38:04.198771] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.588 21:38:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 
-- # rpc_cmd bdev_get_bdevs 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.588 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:03.589 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:03.589 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:03.589 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:03.589 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:03.589 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:03.589 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:03.589 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:03.589 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:03.589 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.589 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.589 [2024-12-10 21:38:04.326228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:03.589 [2024-12-10 21:38:04.328265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is 
claimed 00:11:03.589 [2024-12-10 21:38:04.328373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:03.589 [2024-12-10 21:38:04.328462] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:03.589 [2024-12-10 21:38:04.328563] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:03.589 [2024-12-10 21:38:04.328637] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:03.589 [2024-12-10 21:38:04.328726] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:03.589 [2024-12-10 21:38:04.328766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:03.589 request: 00:11:03.589 { 00:11:03.589 "name": "raid_bdev1", 00:11:03.589 "raid_level": "concat", 00:11:03.589 "base_bdevs": [ 00:11:03.589 "malloc1", 00:11:03.589 "malloc2", 00:11:03.589 "malloc3" 00:11:03.589 ], 00:11:03.589 "strip_size_kb": 64, 00:11:03.589 "superblock": false, 00:11:03.589 "method": "bdev_raid_create", 00:11:03.589 "req_id": 1 00:11:03.589 } 00:11:03.589 Got JSON-RPC error response 00:11:03.589 response: 00:11:03.589 { 00:11:03.589 "code": -17, 00:11:03.589 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:03.589 } 00:11:03.589 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:03.589 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:03.589 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:03.589 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:03.589 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:11:03.589 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.589 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:03.589 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.589 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.589 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.848 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:03.848 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:03.848 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:03.848 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.848 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.848 [2024-12-10 21:38:04.394074] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:03.848 [2024-12-10 21:38:04.394188] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.848 [2024-12-10 21:38:04.394231] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:03.848 [2024-12-10 21:38:04.394261] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.848 [2024-12-10 21:38:04.396729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.848 [2024-12-10 21:38:04.396812] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:03.848 [2024-12-10 21:38:04.396935] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:03.848 [2024-12-10 21:38:04.397033] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:03.848 pt1 00:11:03.848 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.848 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:11:03.848 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:03.848 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.848 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:03.848 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.848 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:03.848 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.848 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.848 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.848 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.848 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.848 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.848 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.848 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.848 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.848 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.848 "name": "raid_bdev1", 
00:11:03.848 "uuid": "231c6dbf-b838-41da-8e18-329c3b837941", 00:11:03.848 "strip_size_kb": 64, 00:11:03.848 "state": "configuring", 00:11:03.848 "raid_level": "concat", 00:11:03.848 "superblock": true, 00:11:03.848 "num_base_bdevs": 3, 00:11:03.848 "num_base_bdevs_discovered": 1, 00:11:03.848 "num_base_bdevs_operational": 3, 00:11:03.848 "base_bdevs_list": [ 00:11:03.848 { 00:11:03.848 "name": "pt1", 00:11:03.848 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:03.848 "is_configured": true, 00:11:03.848 "data_offset": 2048, 00:11:03.848 "data_size": 63488 00:11:03.848 }, 00:11:03.848 { 00:11:03.848 "name": null, 00:11:03.848 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:03.848 "is_configured": false, 00:11:03.848 "data_offset": 2048, 00:11:03.848 "data_size": 63488 00:11:03.848 }, 00:11:03.848 { 00:11:03.848 "name": null, 00:11:03.848 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:03.848 "is_configured": false, 00:11:03.848 "data_offset": 2048, 00:11:03.848 "data_size": 63488 00:11:03.848 } 00:11:03.848 ] 00:11:03.848 }' 00:11:03.848 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.848 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.107 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:11:04.107 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:04.107 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.107 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.107 [2024-12-10 21:38:04.841358] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:04.107 [2024-12-10 21:38:04.841530] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.107 [2024-12-10 21:38:04.841586] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:04.107 [2024-12-10 21:38:04.841628] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.107 [2024-12-10 21:38:04.842193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.107 [2024-12-10 21:38:04.842273] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:04.107 [2024-12-10 21:38:04.842415] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:04.107 [2024-12-10 21:38:04.842500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:04.107 pt2 00:11:04.107 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.107 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:04.107 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.107 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.107 [2024-12-10 21:38:04.853324] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:04.107 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.107 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:11:04.107 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.107 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.107 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:04.107 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.107 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:11:04.107 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.107 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.108 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.108 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.108 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.108 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.108 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.108 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.108 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.366 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.366 "name": "raid_bdev1", 00:11:04.366 "uuid": "231c6dbf-b838-41da-8e18-329c3b837941", 00:11:04.366 "strip_size_kb": 64, 00:11:04.366 "state": "configuring", 00:11:04.366 "raid_level": "concat", 00:11:04.366 "superblock": true, 00:11:04.366 "num_base_bdevs": 3, 00:11:04.366 "num_base_bdevs_discovered": 1, 00:11:04.366 "num_base_bdevs_operational": 3, 00:11:04.366 "base_bdevs_list": [ 00:11:04.366 { 00:11:04.366 "name": "pt1", 00:11:04.366 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:04.366 "is_configured": true, 00:11:04.366 "data_offset": 2048, 00:11:04.366 "data_size": 63488 00:11:04.366 }, 00:11:04.366 { 00:11:04.366 "name": null, 00:11:04.366 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:04.366 "is_configured": false, 00:11:04.366 "data_offset": 0, 00:11:04.366 "data_size": 63488 00:11:04.366 }, 00:11:04.366 { 00:11:04.366 "name": null, 00:11:04.366 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:04.366 "is_configured": false, 00:11:04.366 "data_offset": 2048, 00:11:04.366 "data_size": 63488 00:11:04.366 } 00:11:04.366 ] 00:11:04.366 }' 00:11:04.366 21:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.366 21:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.624 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:04.624 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:04.624 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:04.624 21:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.624 21:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.624 [2024-12-10 21:38:05.272609] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:04.625 [2024-12-10 21:38:05.272775] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.625 [2024-12-10 21:38:05.272815] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:04.625 [2024-12-10 21:38:05.272849] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.625 [2024-12-10 21:38:05.273386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.625 [2024-12-10 21:38:05.273434] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:04.625 [2024-12-10 21:38:05.273529] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:04.625 [2024-12-10 21:38:05.273555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:04.625 pt2 00:11:04.625 21:38:05 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.625 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:04.625 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:04.625 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:04.625 21:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.625 21:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.625 [2024-12-10 21:38:05.280560] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:04.625 [2024-12-10 21:38:05.280658] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.625 [2024-12-10 21:38:05.280710] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:04.625 [2024-12-10 21:38:05.280745] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.625 [2024-12-10 21:38:05.281182] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.625 [2024-12-10 21:38:05.281257] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:04.625 [2024-12-10 21:38:05.281332] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:04.625 [2024-12-10 21:38:05.281375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:04.625 [2024-12-10 21:38:05.281537] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:04.625 [2024-12-10 21:38:05.281562] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:04.625 [2024-12-10 21:38:05.281821] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:11:04.625 [2024-12-10 21:38:05.281987] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:04.625 [2024-12-10 21:38:05.281996] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:04.625 [2024-12-10 21:38:05.282159] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.625 pt3 00:11:04.625 21:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.625 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:04.625 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:04.625 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:04.625 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.625 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.625 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:04.625 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.625 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:04.625 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.625 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.625 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.625 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.625 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.625 21:38:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.625 21:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.625 21:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.625 21:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.625 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.625 "name": "raid_bdev1", 00:11:04.625 "uuid": "231c6dbf-b838-41da-8e18-329c3b837941", 00:11:04.625 "strip_size_kb": 64, 00:11:04.625 "state": "online", 00:11:04.625 "raid_level": "concat", 00:11:04.625 "superblock": true, 00:11:04.625 "num_base_bdevs": 3, 00:11:04.625 "num_base_bdevs_discovered": 3, 00:11:04.625 "num_base_bdevs_operational": 3, 00:11:04.625 "base_bdevs_list": [ 00:11:04.625 { 00:11:04.625 "name": "pt1", 00:11:04.625 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:04.625 "is_configured": true, 00:11:04.625 "data_offset": 2048, 00:11:04.625 "data_size": 63488 00:11:04.625 }, 00:11:04.625 { 00:11:04.625 "name": "pt2", 00:11:04.625 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:04.625 "is_configured": true, 00:11:04.625 "data_offset": 2048, 00:11:04.625 "data_size": 63488 00:11:04.625 }, 00:11:04.625 { 00:11:04.625 "name": "pt3", 00:11:04.625 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:04.625 "is_configured": true, 00:11:04.625 "data_offset": 2048, 00:11:04.625 "data_size": 63488 00:11:04.625 } 00:11:04.625 ] 00:11:04.625 }' 00:11:04.625 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.625 21:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.194 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:05.194 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:11:05.194 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:05.194 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:05.194 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:05.195 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:05.195 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:05.195 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:05.195 21:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.195 21:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.195 [2024-12-10 21:38:05.732205] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:05.195 21:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.195 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:05.195 "name": "raid_bdev1", 00:11:05.195 "aliases": [ 00:11:05.195 "231c6dbf-b838-41da-8e18-329c3b837941" 00:11:05.195 ], 00:11:05.195 "product_name": "Raid Volume", 00:11:05.195 "block_size": 512, 00:11:05.195 "num_blocks": 190464, 00:11:05.195 "uuid": "231c6dbf-b838-41da-8e18-329c3b837941", 00:11:05.195 "assigned_rate_limits": { 00:11:05.195 "rw_ios_per_sec": 0, 00:11:05.195 "rw_mbytes_per_sec": 0, 00:11:05.195 "r_mbytes_per_sec": 0, 00:11:05.195 "w_mbytes_per_sec": 0 00:11:05.195 }, 00:11:05.195 "claimed": false, 00:11:05.195 "zoned": false, 00:11:05.195 "supported_io_types": { 00:11:05.195 "read": true, 00:11:05.195 "write": true, 00:11:05.195 "unmap": true, 00:11:05.195 "flush": true, 00:11:05.195 "reset": true, 00:11:05.195 "nvme_admin": false, 00:11:05.195 "nvme_io": false, 
00:11:05.195 "nvme_io_md": false, 00:11:05.195 "write_zeroes": true, 00:11:05.195 "zcopy": false, 00:11:05.195 "get_zone_info": false, 00:11:05.195 "zone_management": false, 00:11:05.195 "zone_append": false, 00:11:05.195 "compare": false, 00:11:05.195 "compare_and_write": false, 00:11:05.195 "abort": false, 00:11:05.195 "seek_hole": false, 00:11:05.195 "seek_data": false, 00:11:05.195 "copy": false, 00:11:05.195 "nvme_iov_md": false 00:11:05.195 }, 00:11:05.195 "memory_domains": [ 00:11:05.195 { 00:11:05.195 "dma_device_id": "system", 00:11:05.195 "dma_device_type": 1 00:11:05.195 }, 00:11:05.195 { 00:11:05.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.195 "dma_device_type": 2 00:11:05.195 }, 00:11:05.195 { 00:11:05.195 "dma_device_id": "system", 00:11:05.195 "dma_device_type": 1 00:11:05.195 }, 00:11:05.195 { 00:11:05.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.195 "dma_device_type": 2 00:11:05.195 }, 00:11:05.195 { 00:11:05.195 "dma_device_id": "system", 00:11:05.195 "dma_device_type": 1 00:11:05.195 }, 00:11:05.195 { 00:11:05.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.195 "dma_device_type": 2 00:11:05.195 } 00:11:05.195 ], 00:11:05.195 "driver_specific": { 00:11:05.195 "raid": { 00:11:05.195 "uuid": "231c6dbf-b838-41da-8e18-329c3b837941", 00:11:05.195 "strip_size_kb": 64, 00:11:05.195 "state": "online", 00:11:05.195 "raid_level": "concat", 00:11:05.195 "superblock": true, 00:11:05.195 "num_base_bdevs": 3, 00:11:05.195 "num_base_bdevs_discovered": 3, 00:11:05.195 "num_base_bdevs_operational": 3, 00:11:05.195 "base_bdevs_list": [ 00:11:05.195 { 00:11:05.195 "name": "pt1", 00:11:05.195 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:05.195 "is_configured": true, 00:11:05.195 "data_offset": 2048, 00:11:05.195 "data_size": 63488 00:11:05.195 }, 00:11:05.195 { 00:11:05.195 "name": "pt2", 00:11:05.195 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:05.195 "is_configured": true, 00:11:05.195 "data_offset": 2048, 00:11:05.195 
"data_size": 63488 00:11:05.195 }, 00:11:05.195 { 00:11:05.195 "name": "pt3", 00:11:05.195 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:05.195 "is_configured": true, 00:11:05.195 "data_offset": 2048, 00:11:05.195 "data_size": 63488 00:11:05.195 } 00:11:05.195 ] 00:11:05.195 } 00:11:05.195 } 00:11:05.195 }' 00:11:05.195 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:05.195 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:05.195 pt2 00:11:05.195 pt3' 00:11:05.195 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.195 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:05.195 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.195 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.195 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:05.195 21:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.195 21:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.195 21:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.195 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.195 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.195 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.195 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:05.195 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.195 21:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.195 21:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.195 21:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.195 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.195 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.195 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.195 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:05.195 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.195 21:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.195 21:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.195 21:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.195 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.195 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.195 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:05.195 21:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:05.195 21:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.195 21:38:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:05.454 [2024-12-10 21:38:05.975874] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:05.454 21:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.454 21:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 231c6dbf-b838-41da-8e18-329c3b837941 '!=' 231c6dbf-b838-41da-8e18-329c3b837941 ']' 00:11:05.454 21:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:05.454 21:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:05.454 21:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:05.454 21:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66972 00:11:05.454 21:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66972 ']' 00:11:05.454 21:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66972 00:11:05.454 21:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:05.454 21:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:05.454 21:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66972 00:11:05.454 killing process with pid 66972 00:11:05.454 21:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:05.454 21:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:05.454 21:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66972' 00:11:05.454 21:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66972 00:11:05.454 [2024-12-10 21:38:06.060843] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:11:05.454 [2024-12-10 21:38:06.060947] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:05.454 21:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66972 00:11:05.454 [2024-12-10 21:38:06.061020] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:05.454 [2024-12-10 21:38:06.061034] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:05.713 [2024-12-10 21:38:06.390958] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:07.093 ************************************ 00:11:07.093 END TEST raid_superblock_test 00:11:07.093 ************************************ 00:11:07.093 21:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:07.093 00:11:07.093 real 0m5.288s 00:11:07.093 user 0m7.556s 00:11:07.093 sys 0m0.899s 00:11:07.093 21:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:07.093 21:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.093 21:38:07 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:11:07.093 21:38:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:07.093 21:38:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:07.093 21:38:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:07.093 ************************************ 00:11:07.093 START TEST raid_read_error_test 00:11:07.093 ************************************ 00:11:07.093 21:38:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:11:07.093 21:38:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:07.093 21:38:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:11:07.093 21:38:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:07.093 21:38:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:07.093 21:38:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.093 21:38:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:07.093 21:38:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:07.093 21:38:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.093 21:38:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:07.093 21:38:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:07.093 21:38:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.093 21:38:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:07.093 21:38:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:07.093 21:38:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.093 21:38:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:07.093 21:38:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:07.093 21:38:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:07.093 21:38:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:07.093 21:38:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:07.093 21:38:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:07.093 21:38:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:07.093 21:38:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:07.093 21:38:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:07.093 21:38:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:07.093 21:38:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:07.093 21:38:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.CHPfaE5vUA 00:11:07.093 21:38:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67225 00:11:07.093 21:38:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:07.093 21:38:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67225 00:11:07.093 21:38:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67225 ']' 00:11:07.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.093 21:38:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.093 21:38:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:07.093 21:38:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.093 21:38:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:07.093 21:38:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.093 [2024-12-10 21:38:07.726170] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:11:07.093 [2024-12-10 21:38:07.726303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67225 ] 00:11:07.353 [2024-12-10 21:38:07.882980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.353 [2024-12-10 21:38:08.006659] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.612 [2024-12-10 21:38:08.231899] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:07.612 [2024-12-10 21:38:08.231970] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:07.872 21:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:07.872 21:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:07.872 21:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:07.872 21:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:07.872 21:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.872 21:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.872 BaseBdev1_malloc 00:11:07.872 21:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.872 21:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:07.872 21:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.872 21:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.872 true 00:11:07.872 21:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:07.872 21:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:07.872 21:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.872 21:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.872 [2024-12-10 21:38:08.647749] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:07.872 [2024-12-10 21:38:08.647891] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.872 [2024-12-10 21:38:08.647940] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:07.872 [2024-12-10 21:38:08.647982] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.872 [2024-12-10 21:38:08.650546] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.872 [2024-12-10 21:38:08.650630] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:08.132 BaseBdev1 00:11:08.132 21:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.132 21:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:08.132 21:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:08.132 21:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.132 21:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.132 BaseBdev2_malloc 00:11:08.132 21:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.132 21:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:08.132 21:38:08 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.132 21:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.132 true 00:11:08.132 21:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.132 21:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:08.132 21:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.132 21:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.132 [2024-12-10 21:38:08.715448] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:08.132 [2024-12-10 21:38:08.715564] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.132 [2024-12-10 21:38:08.715631] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:08.132 [2024-12-10 21:38:08.715670] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.132 [2024-12-10 21:38:08.718173] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.132 [2024-12-10 21:38:08.718262] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:08.132 BaseBdev2 00:11:08.132 21:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.132 21:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:08.132 21:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:08.132 21:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.132 21:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.132 BaseBdev3_malloc 00:11:08.132 21:38:08 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.132 21:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:08.132 21:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.132 21:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.132 true 00:11:08.132 21:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.132 21:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:08.132 21:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.132 21:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.132 [2024-12-10 21:38:08.801105] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:08.132 [2024-12-10 21:38:08.801257] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.132 [2024-12-10 21:38:08.801294] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:08.132 [2024-12-10 21:38:08.801310] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.132 [2024-12-10 21:38:08.804003] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.132 [2024-12-10 21:38:08.804055] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:08.132 BaseBdev3 00:11:08.132 21:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.132 21:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:08.132 21:38:08 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.132 21:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.132 [2024-12-10 21:38:08.809163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:08.132 [2024-12-10 21:38:08.811289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:08.132 [2024-12-10 21:38:08.811464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:08.132 [2024-12-10 21:38:08.811843] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:08.133 [2024-12-10 21:38:08.811910] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:08.133 [2024-12-10 21:38:08.812299] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:08.133 [2024-12-10 21:38:08.812565] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:08.133 [2024-12-10 21:38:08.812621] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:08.133 [2024-12-10 21:38:08.812886] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:08.133 21:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.133 21:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:08.133 21:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:08.133 21:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:08.133 21:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:08.133 21:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.133 21:38:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:08.133 21:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.133 21:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.133 21:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.133 21:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.133 21:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.133 21:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.133 21:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.133 21:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.133 21:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.133 21:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.133 "name": "raid_bdev1", 00:11:08.133 "uuid": "973578d8-1675-411b-8a7b-261a2263f2ac", 00:11:08.133 "strip_size_kb": 64, 00:11:08.133 "state": "online", 00:11:08.133 "raid_level": "concat", 00:11:08.133 "superblock": true, 00:11:08.133 "num_base_bdevs": 3, 00:11:08.133 "num_base_bdevs_discovered": 3, 00:11:08.133 "num_base_bdevs_operational": 3, 00:11:08.133 "base_bdevs_list": [ 00:11:08.133 { 00:11:08.133 "name": "BaseBdev1", 00:11:08.133 "uuid": "b2ef7382-fa7d-58b9-81a8-fef89ffcad51", 00:11:08.133 "is_configured": true, 00:11:08.133 "data_offset": 2048, 00:11:08.133 "data_size": 63488 00:11:08.133 }, 00:11:08.133 { 00:11:08.133 "name": "BaseBdev2", 00:11:08.133 "uuid": "f400903a-f546-5645-a27d-44146d27060a", 00:11:08.133 "is_configured": true, 00:11:08.133 "data_offset": 2048, 00:11:08.133 "data_size": 63488 
00:11:08.133 }, 00:11:08.133 { 00:11:08.133 "name": "BaseBdev3", 00:11:08.133 "uuid": "b9f53545-0cbc-58a5-8e81-03779d85803a", 00:11:08.133 "is_configured": true, 00:11:08.133 "data_offset": 2048, 00:11:08.133 "data_size": 63488 00:11:08.133 } 00:11:08.133 ] 00:11:08.133 }' 00:11:08.133 21:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.133 21:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.702 21:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:08.702 21:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:08.702 [2024-12-10 21:38:09.389607] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:09.639 21:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:09.639 21:38:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.640 21:38:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.640 21:38:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.640 21:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:09.640 21:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:09.640 21:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:09.640 21:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:09.640 21:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:09.640 21:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:11:09.640 21:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:09.640 21:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.640 21:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:09.640 21:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.640 21:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.640 21:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.640 21:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.640 21:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.640 21:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.640 21:38:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.640 21:38:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.640 21:38:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.640 21:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.640 "name": "raid_bdev1", 00:11:09.640 "uuid": "973578d8-1675-411b-8a7b-261a2263f2ac", 00:11:09.640 "strip_size_kb": 64, 00:11:09.640 "state": "online", 00:11:09.640 "raid_level": "concat", 00:11:09.640 "superblock": true, 00:11:09.640 "num_base_bdevs": 3, 00:11:09.640 "num_base_bdevs_discovered": 3, 00:11:09.640 "num_base_bdevs_operational": 3, 00:11:09.640 "base_bdevs_list": [ 00:11:09.640 { 00:11:09.640 "name": "BaseBdev1", 00:11:09.640 "uuid": "b2ef7382-fa7d-58b9-81a8-fef89ffcad51", 00:11:09.640 "is_configured": true, 00:11:09.640 "data_offset": 2048, 00:11:09.640 "data_size": 63488 
00:11:09.640 }, 00:11:09.640 { 00:11:09.640 "name": "BaseBdev2", 00:11:09.640 "uuid": "f400903a-f546-5645-a27d-44146d27060a", 00:11:09.640 "is_configured": true, 00:11:09.640 "data_offset": 2048, 00:11:09.640 "data_size": 63488 00:11:09.640 }, 00:11:09.640 { 00:11:09.640 "name": "BaseBdev3", 00:11:09.640 "uuid": "b9f53545-0cbc-58a5-8e81-03779d85803a", 00:11:09.640 "is_configured": true, 00:11:09.640 "data_offset": 2048, 00:11:09.640 "data_size": 63488 00:11:09.640 } 00:11:09.640 ] 00:11:09.640 }' 00:11:09.640 21:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.640 21:38:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.208 21:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:10.208 21:38:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.208 21:38:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.208 [2024-12-10 21:38:10.761844] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:10.208 [2024-12-10 21:38:10.761981] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:10.208 [2024-12-10 21:38:10.765330] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:10.208 [2024-12-10 21:38:10.765453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:10.208 [2024-12-10 21:38:10.765514] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:10.208 [2024-12-10 21:38:10.765579] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:10.208 { 00:11:10.208 "results": [ 00:11:10.208 { 00:11:10.208 "job": "raid_bdev1", 00:11:10.208 "core_mask": "0x1", 00:11:10.208 "workload": "randrw", 00:11:10.208 "percentage": 50, 
00:11:10.208 "status": "finished", 00:11:10.208 "queue_depth": 1, 00:11:10.208 "io_size": 131072, 00:11:10.208 "runtime": 1.37328, 00:11:10.208 "iops": 14185.016893859956, 00:11:10.208 "mibps": 1773.1271117324945, 00:11:10.208 "io_failed": 1, 00:11:10.208 "io_timeout": 0, 00:11:10.208 "avg_latency_us": 97.50875570396775, 00:11:10.208 "min_latency_us": 28.17117903930131, 00:11:10.208 "max_latency_us": 1445.2262008733624 00:11:10.208 } 00:11:10.208 ], 00:11:10.208 "core_count": 1 00:11:10.208 } 00:11:10.208 21:38:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.208 21:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67225 00:11:10.208 21:38:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67225 ']' 00:11:10.208 21:38:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67225 00:11:10.208 21:38:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:10.208 21:38:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:10.208 21:38:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67225 00:11:10.208 killing process with pid 67225 00:11:10.208 21:38:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:10.208 21:38:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:10.208 21:38:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67225' 00:11:10.208 21:38:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67225 00:11:10.208 [2024-12-10 21:38:10.806227] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:10.208 21:38:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67225 00:11:10.470 [2024-12-10 
21:38:11.053390] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:11.874 21:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:11.874 21:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.CHPfaE5vUA 00:11:11.874 21:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:11.874 21:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:11.874 21:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:11.874 ************************************ 00:11:11.874 END TEST raid_read_error_test 00:11:11.874 ************************************ 00:11:11.874 21:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:11.874 21:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:11.874 21:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:11:11.874 00:11:11.874 real 0m4.694s 00:11:11.874 user 0m5.610s 00:11:11.874 sys 0m0.567s 00:11:11.874 21:38:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.874 21:38:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.874 21:38:12 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:11:11.874 21:38:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:11.874 21:38:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.874 21:38:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:11.874 ************************************ 00:11:11.874 START TEST raid_write_error_test 00:11:11.874 ************************************ 00:11:11.874 21:38:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:11:11.874 21:38:12 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:11.874 21:38:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:11.874 21:38:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:11.874 21:38:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:11.874 21:38:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:11.874 21:38:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:11.874 21:38:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:11.874 21:38:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:11.874 21:38:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:11.874 21:38:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:11.874 21:38:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:11.874 21:38:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:11.874 21:38:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:11.874 21:38:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:11.874 21:38:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:11.874 21:38:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:11.874 21:38:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:11.874 21:38:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:11.874 21:38:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:11.874 21:38:12 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:11.874 21:38:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:11.874 21:38:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:11.874 21:38:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:11.874 21:38:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:11.874 21:38:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:11.874 21:38:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.e3Wisy8kef 00:11:11.874 21:38:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67371 00:11:11.874 21:38:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:11.874 21:38:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67371 00:11:11.874 21:38:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67371 ']' 00:11:11.874 21:38:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.874 21:38:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:11.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.874 21:38:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:11.874 21:38:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:11.874 21:38:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.874 [2024-12-10 21:38:12.509494] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:11:11.874 [2024-12-10 21:38:12.509625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67371 ] 00:11:12.141 [2024-12-10 21:38:12.688516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.141 [2024-12-10 21:38:12.815409] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.399 [2024-12-10 21:38:13.039395] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.399 [2024-12-10 21:38:13.039593] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.657 21:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:12.657 21:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:12.657 21:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:12.657 21:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:12.657 21:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.657 21:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.916 BaseBdev1_malloc 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.916 true 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.916 [2024-12-10 21:38:13.460575] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:12.916 [2024-12-10 21:38:13.460705] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.916 [2024-12-10 21:38:13.460738] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:12.916 [2024-12-10 21:38:13.460751] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.916 [2024-12-10 21:38:13.463254] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.916 [2024-12-10 21:38:13.463363] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:12.916 BaseBdev1 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:12.916 BaseBdev2_malloc 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.916 true 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.916 [2024-12-10 21:38:13.530301] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:12.916 [2024-12-10 21:38:13.530471] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.916 [2024-12-10 21:38:13.530522] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:12.916 [2024-12-10 21:38:13.530589] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.916 [2024-12-10 21:38:13.533131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.916 [2024-12-10 21:38:13.533231] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:12.916 BaseBdev2 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:12.916 21:38:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.916 BaseBdev3_malloc 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.916 true 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.916 [2024-12-10 21:38:13.611708] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:12.916 [2024-12-10 21:38:13.611831] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.916 [2024-12-10 21:38:13.611874] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:12.916 [2024-12-10 21:38:13.611911] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.916 [2024-12-10 21:38:13.614394] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.916 [2024-12-10 21:38:13.614495] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:11:12.916 BaseBdev3 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.916 [2024-12-10 21:38:13.623832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:12.916 [2024-12-10 21:38:13.625886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:12.916 [2024-12-10 21:38:13.626034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:12.916 [2024-12-10 21:38:13.626308] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:12.916 [2024-12-10 21:38:13.626362] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:12.916 [2024-12-10 21:38:13.626710] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:12.916 [2024-12-10 21:38:13.626930] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:12.916 [2024-12-10 21:38:13.626982] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:12.916 [2024-12-10 21:38:13.627212] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.916 "name": "raid_bdev1", 00:11:12.916 "uuid": "a23c7c94-1549-4ad9-ada1-1d88704a6a76", 00:11:12.916 "strip_size_kb": 64, 00:11:12.916 "state": "online", 00:11:12.916 "raid_level": "concat", 00:11:12.916 "superblock": true, 00:11:12.916 "num_base_bdevs": 3, 00:11:12.916 "num_base_bdevs_discovered": 3, 00:11:12.916 "num_base_bdevs_operational": 3, 00:11:12.916 "base_bdevs_list": [ 00:11:12.916 { 00:11:12.916 
"name": "BaseBdev1", 00:11:12.916 "uuid": "8b8a8714-57c3-5e32-a978-d68ddd8ad005", 00:11:12.916 "is_configured": true, 00:11:12.916 "data_offset": 2048, 00:11:12.916 "data_size": 63488 00:11:12.916 }, 00:11:12.916 { 00:11:12.916 "name": "BaseBdev2", 00:11:12.916 "uuid": "fabb3b4a-3fab-50a3-a7b2-f80e19677717", 00:11:12.916 "is_configured": true, 00:11:12.916 "data_offset": 2048, 00:11:12.916 "data_size": 63488 00:11:12.916 }, 00:11:12.916 { 00:11:12.916 "name": "BaseBdev3", 00:11:12.916 "uuid": "7148e6bc-37d9-5ae9-9b5c-ded41a476cf3", 00:11:12.916 "is_configured": true, 00:11:12.916 "data_offset": 2048, 00:11:12.916 "data_size": 63488 00:11:12.916 } 00:11:12.916 ] 00:11:12.916 }' 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.916 21:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.483 21:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:13.483 21:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:13.483 [2024-12-10 21:38:14.196370] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:14.418 21:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:14.418 21:38:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.418 21:38:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.418 21:38:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.418 21:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:14.418 21:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:14.418 21:38:15 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:14.418 21:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:14.418 21:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:14.418 21:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:14.418 21:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.418 21:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.418 21:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:14.418 21:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.418 21:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.418 21:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.418 21:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.418 21:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.418 21:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.418 21:38:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.418 21:38:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.418 21:38:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.418 21:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.418 "name": "raid_bdev1", 00:11:14.418 "uuid": "a23c7c94-1549-4ad9-ada1-1d88704a6a76", 00:11:14.418 "strip_size_kb": 64, 00:11:14.418 "state": "online", 
00:11:14.418 "raid_level": "concat", 00:11:14.418 "superblock": true, 00:11:14.418 "num_base_bdevs": 3, 00:11:14.418 "num_base_bdevs_discovered": 3, 00:11:14.418 "num_base_bdevs_operational": 3, 00:11:14.418 "base_bdevs_list": [ 00:11:14.418 { 00:11:14.418 "name": "BaseBdev1", 00:11:14.418 "uuid": "8b8a8714-57c3-5e32-a978-d68ddd8ad005", 00:11:14.418 "is_configured": true, 00:11:14.418 "data_offset": 2048, 00:11:14.418 "data_size": 63488 00:11:14.418 }, 00:11:14.418 { 00:11:14.418 "name": "BaseBdev2", 00:11:14.418 "uuid": "fabb3b4a-3fab-50a3-a7b2-f80e19677717", 00:11:14.418 "is_configured": true, 00:11:14.418 "data_offset": 2048, 00:11:14.418 "data_size": 63488 00:11:14.418 }, 00:11:14.418 { 00:11:14.418 "name": "BaseBdev3", 00:11:14.418 "uuid": "7148e6bc-37d9-5ae9-9b5c-ded41a476cf3", 00:11:14.418 "is_configured": true, 00:11:14.418 "data_offset": 2048, 00:11:14.418 "data_size": 63488 00:11:14.418 } 00:11:14.418 ] 00:11:14.418 }' 00:11:14.418 21:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.418 21:38:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.985 21:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:14.985 21:38:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.985 21:38:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.985 [2024-12-10 21:38:15.557717] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:14.985 [2024-12-10 21:38:15.557828] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:14.985 [2024-12-10 21:38:15.561085] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:14.985 [2024-12-10 21:38:15.561187] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:14.985 [2024-12-10 21:38:15.561252] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:14.985 [2024-12-10 21:38:15.561325] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:14.985 { 00:11:14.985 "results": [ 00:11:14.985 { 00:11:14.985 "job": "raid_bdev1", 00:11:14.985 "core_mask": "0x1", 00:11:14.985 "workload": "randrw", 00:11:14.985 "percentage": 50, 00:11:14.985 "status": "finished", 00:11:14.985 "queue_depth": 1, 00:11:14.985 "io_size": 131072, 00:11:14.985 "runtime": 1.362119, 00:11:14.985 "iops": 13528.92074774671, 00:11:14.985 "mibps": 1691.1150934683387, 00:11:14.985 "io_failed": 1, 00:11:14.985 "io_timeout": 0, 00:11:14.985 "avg_latency_us": 102.26866494117279, 00:11:14.985 "min_latency_us": 29.289082969432314, 00:11:14.985 "max_latency_us": 1738.564192139738 00:11:14.985 } 00:11:14.985 ], 00:11:14.985 "core_count": 1 00:11:14.985 } 00:11:14.985 21:38:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.985 21:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67371 00:11:14.985 21:38:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67371 ']' 00:11:14.985 21:38:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67371 00:11:14.985 21:38:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:14.985 21:38:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:14.985 21:38:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67371 00:11:14.985 21:38:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:14.985 21:38:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:14.985 21:38:15 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 67371' 00:11:14.985 killing process with pid 67371 00:11:14.985 21:38:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67371 00:11:14.985 [2024-12-10 21:38:15.610765] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:14.985 21:38:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67371 00:11:15.243 [2024-12-10 21:38:15.877406] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:16.637 21:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.e3Wisy8kef 00:11:16.637 21:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:16.637 21:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:16.637 21:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:16.637 21:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:16.638 21:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:16.638 21:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:16.638 21:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:11:16.638 00:11:16.638 real 0m4.790s 00:11:16.638 user 0m5.725s 00:11:16.638 sys 0m0.557s 00:11:16.638 ************************************ 00:11:16.638 END TEST raid_write_error_test 00:11:16.638 ************************************ 00:11:16.638 21:38:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.638 21:38:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.638 21:38:17 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:16.638 21:38:17 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:11:16.638 21:38:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:16.638 21:38:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.638 21:38:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:16.638 ************************************ 00:11:16.638 START TEST raid_state_function_test 00:11:16.638 ************************************ 00:11:16.638 21:38:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:11:16.638 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:16.638 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:16.638 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:16.638 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:16.638 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:16.638 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:16.638 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:16.638 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:16.638 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:16.638 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:16.638 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:16.638 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:16.638 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:16.638 21:38:17 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:16.638 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:16.638 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:16.638 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:16.638 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:16.638 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:16.638 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:16.638 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:16.638 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:16.638 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:16.638 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:16.638 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:16.638 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67514 00:11:16.638 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:16.638 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67514' 00:11:16.638 Process raid pid: 67514 00:11:16.638 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67514 00:11:16.638 21:38:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67514 ']' 00:11:16.638 21:38:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.638 21:38:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:16.638 21:38:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.638 21:38:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:16.638 21:38:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.638 [2024-12-10 21:38:17.367608] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:11:16.638 [2024-12-10 21:38:17.367870] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:16.900 [2024-12-10 21:38:17.550062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.900 [2024-12-10 21:38:17.674941] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.158 [2024-12-10 21:38:17.887015] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:17.158 [2024-12-10 21:38:17.887156] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:17.724 21:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:17.724 21:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:17.724 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:17.724 21:38:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.724 21:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.724 [2024-12-10 21:38:18.245539] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:17.724 [2024-12-10 21:38:18.245695] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:17.724 [2024-12-10 21:38:18.245733] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:17.724 [2024-12-10 21:38:18.245761] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:17.724 [2024-12-10 21:38:18.245782] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:17.724 [2024-12-10 21:38:18.245806] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:17.724 21:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.724 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:17.724 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.724 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.724 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.724 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.724 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:17.724 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.724 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.724 
21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.724 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.724 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.724 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.724 21:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.724 21:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.724 21:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.724 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.724 "name": "Existed_Raid", 00:11:17.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.724 "strip_size_kb": 0, 00:11:17.724 "state": "configuring", 00:11:17.724 "raid_level": "raid1", 00:11:17.724 "superblock": false, 00:11:17.724 "num_base_bdevs": 3, 00:11:17.724 "num_base_bdevs_discovered": 0, 00:11:17.724 "num_base_bdevs_operational": 3, 00:11:17.724 "base_bdevs_list": [ 00:11:17.724 { 00:11:17.724 "name": "BaseBdev1", 00:11:17.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.724 "is_configured": false, 00:11:17.724 "data_offset": 0, 00:11:17.724 "data_size": 0 00:11:17.724 }, 00:11:17.724 { 00:11:17.724 "name": "BaseBdev2", 00:11:17.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.724 "is_configured": false, 00:11:17.724 "data_offset": 0, 00:11:17.724 "data_size": 0 00:11:17.724 }, 00:11:17.724 { 00:11:17.724 "name": "BaseBdev3", 00:11:17.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.724 "is_configured": false, 00:11:17.724 "data_offset": 0, 00:11:17.724 "data_size": 0 00:11:17.724 } 00:11:17.724 ] 00:11:17.724 }' 00:11:17.724 21:38:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.724 21:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.983 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:17.983 21:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.983 21:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.983 [2024-12-10 21:38:18.656775] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:17.983 [2024-12-10 21:38:18.656817] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:17.983 21:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.983 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:17.983 21:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.983 21:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.983 [2024-12-10 21:38:18.668741] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:17.983 [2024-12-10 21:38:18.668794] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:17.983 [2024-12-10 21:38:18.668804] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:17.983 [2024-12-10 21:38:18.668825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:17.983 [2024-12-10 21:38:18.668831] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:17.983 [2024-12-10 21:38:18.668840] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:17.983 21:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.983 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:17.983 21:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.983 21:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.983 [2024-12-10 21:38:18.717715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:17.983 BaseBdev1 00:11:17.983 21:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.983 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:17.983 21:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:17.983 21:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:17.983 21:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:17.983 21:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:17.983 21:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:17.983 21:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:17.983 21:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.983 21:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.983 21:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.983 21:38:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:17.983 21:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.983 21:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.983 [ 00:11:17.983 { 00:11:17.983 "name": "BaseBdev1", 00:11:17.983 "aliases": [ 00:11:17.983 "6d4700ee-8c7b-408d-a70e-37611f5eabad" 00:11:17.983 ], 00:11:17.983 "product_name": "Malloc disk", 00:11:17.983 "block_size": 512, 00:11:17.983 "num_blocks": 65536, 00:11:17.983 "uuid": "6d4700ee-8c7b-408d-a70e-37611f5eabad", 00:11:17.983 "assigned_rate_limits": { 00:11:17.983 "rw_ios_per_sec": 0, 00:11:17.983 "rw_mbytes_per_sec": 0, 00:11:17.983 "r_mbytes_per_sec": 0, 00:11:17.983 "w_mbytes_per_sec": 0 00:11:17.983 }, 00:11:17.983 "claimed": true, 00:11:17.983 "claim_type": "exclusive_write", 00:11:17.983 "zoned": false, 00:11:17.983 "supported_io_types": { 00:11:17.983 "read": true, 00:11:17.983 "write": true, 00:11:17.983 "unmap": true, 00:11:17.983 "flush": true, 00:11:17.983 "reset": true, 00:11:17.983 "nvme_admin": false, 00:11:17.983 "nvme_io": false, 00:11:17.983 "nvme_io_md": false, 00:11:17.983 "write_zeroes": true, 00:11:17.983 "zcopy": true, 00:11:17.983 "get_zone_info": false, 00:11:17.983 "zone_management": false, 00:11:17.983 "zone_append": false, 00:11:17.983 "compare": false, 00:11:17.983 "compare_and_write": false, 00:11:17.983 "abort": true, 00:11:17.983 "seek_hole": false, 00:11:17.983 "seek_data": false, 00:11:17.983 "copy": true, 00:11:17.983 "nvme_iov_md": false 00:11:17.983 }, 00:11:17.983 "memory_domains": [ 00:11:17.983 { 00:11:17.983 "dma_device_id": "system", 00:11:17.983 "dma_device_type": 1 00:11:17.983 }, 00:11:17.983 { 00:11:17.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.983 "dma_device_type": 2 00:11:17.983 } 00:11:17.983 ], 00:11:17.983 "driver_specific": {} 00:11:17.983 } 00:11:17.983 ] 00:11:17.983 21:38:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.983 21:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:17.983 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:17.983 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.983 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.983 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.983 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.983 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:17.983 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.983 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.983 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.983 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.242 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.242 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.242 21:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.242 21:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.242 21:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.242 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:18.242 "name": "Existed_Raid", 00:11:18.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.242 "strip_size_kb": 0, 00:11:18.242 "state": "configuring", 00:11:18.242 "raid_level": "raid1", 00:11:18.242 "superblock": false, 00:11:18.242 "num_base_bdevs": 3, 00:11:18.242 "num_base_bdevs_discovered": 1, 00:11:18.242 "num_base_bdevs_operational": 3, 00:11:18.242 "base_bdevs_list": [ 00:11:18.242 { 00:11:18.242 "name": "BaseBdev1", 00:11:18.242 "uuid": "6d4700ee-8c7b-408d-a70e-37611f5eabad", 00:11:18.242 "is_configured": true, 00:11:18.242 "data_offset": 0, 00:11:18.242 "data_size": 65536 00:11:18.242 }, 00:11:18.242 { 00:11:18.242 "name": "BaseBdev2", 00:11:18.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.242 "is_configured": false, 00:11:18.242 "data_offset": 0, 00:11:18.242 "data_size": 0 00:11:18.242 }, 00:11:18.242 { 00:11:18.242 "name": "BaseBdev3", 00:11:18.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.242 "is_configured": false, 00:11:18.242 "data_offset": 0, 00:11:18.242 "data_size": 0 00:11:18.242 } 00:11:18.242 ] 00:11:18.242 }' 00:11:18.242 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.242 21:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.501 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:18.501 21:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.501 21:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.501 [2024-12-10 21:38:19.232930] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:18.501 [2024-12-10 21:38:19.232993] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:18.501 21:38:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.501 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:18.501 21:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.501 21:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.501 [2024-12-10 21:38:19.244958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:18.501 [2024-12-10 21:38:19.247066] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:18.501 [2024-12-10 21:38:19.247110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:18.501 [2024-12-10 21:38:19.247122] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:18.501 [2024-12-10 21:38:19.247131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:18.501 21:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.501 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:18.501 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:18.501 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:18.501 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.501 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.501 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.501 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:18.501 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:18.501 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.501 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.501 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.501 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.501 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.501 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.501 21:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.501 21:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.501 21:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.760 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.760 "name": "Existed_Raid", 00:11:18.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.760 "strip_size_kb": 0, 00:11:18.760 "state": "configuring", 00:11:18.760 "raid_level": "raid1", 00:11:18.760 "superblock": false, 00:11:18.760 "num_base_bdevs": 3, 00:11:18.760 "num_base_bdevs_discovered": 1, 00:11:18.760 "num_base_bdevs_operational": 3, 00:11:18.760 "base_bdevs_list": [ 00:11:18.760 { 00:11:18.760 "name": "BaseBdev1", 00:11:18.760 "uuid": "6d4700ee-8c7b-408d-a70e-37611f5eabad", 00:11:18.760 "is_configured": true, 00:11:18.760 "data_offset": 0, 00:11:18.760 "data_size": 65536 00:11:18.760 }, 00:11:18.760 { 00:11:18.760 "name": "BaseBdev2", 00:11:18.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.760 
"is_configured": false, 00:11:18.760 "data_offset": 0, 00:11:18.760 "data_size": 0 00:11:18.760 }, 00:11:18.760 { 00:11:18.760 "name": "BaseBdev3", 00:11:18.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.760 "is_configured": false, 00:11:18.760 "data_offset": 0, 00:11:18.760 "data_size": 0 00:11:18.760 } 00:11:18.760 ] 00:11:18.760 }' 00:11:18.760 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.760 21:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.019 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:19.019 21:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.019 21:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.019 [2024-12-10 21:38:19.728870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:19.019 BaseBdev2 00:11:19.019 21:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.019 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:19.019 21:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:19.019 21:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:19.019 21:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:19.019 21:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:19.019 21:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:19.019 21:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:19.019 21:38:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.019 21:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.019 21:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.019 21:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:19.019 21:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.019 21:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.019 [ 00:11:19.019 { 00:11:19.019 "name": "BaseBdev2", 00:11:19.019 "aliases": [ 00:11:19.019 "330feffa-4acd-467c-9fcd-d4a946e336f9" 00:11:19.019 ], 00:11:19.019 "product_name": "Malloc disk", 00:11:19.019 "block_size": 512, 00:11:19.019 "num_blocks": 65536, 00:11:19.019 "uuid": "330feffa-4acd-467c-9fcd-d4a946e336f9", 00:11:19.019 "assigned_rate_limits": { 00:11:19.019 "rw_ios_per_sec": 0, 00:11:19.019 "rw_mbytes_per_sec": 0, 00:11:19.019 "r_mbytes_per_sec": 0, 00:11:19.019 "w_mbytes_per_sec": 0 00:11:19.019 }, 00:11:19.019 "claimed": true, 00:11:19.019 "claim_type": "exclusive_write", 00:11:19.019 "zoned": false, 00:11:19.019 "supported_io_types": { 00:11:19.019 "read": true, 00:11:19.019 "write": true, 00:11:19.019 "unmap": true, 00:11:19.019 "flush": true, 00:11:19.019 "reset": true, 00:11:19.019 "nvme_admin": false, 00:11:19.019 "nvme_io": false, 00:11:19.019 "nvme_io_md": false, 00:11:19.019 "write_zeroes": true, 00:11:19.019 "zcopy": true, 00:11:19.019 "get_zone_info": false, 00:11:19.019 "zone_management": false, 00:11:19.019 "zone_append": false, 00:11:19.019 "compare": false, 00:11:19.019 "compare_and_write": false, 00:11:19.019 "abort": true, 00:11:19.019 "seek_hole": false, 00:11:19.019 "seek_data": false, 00:11:19.019 "copy": true, 00:11:19.019 "nvme_iov_md": false 00:11:19.019 }, 00:11:19.019 
"memory_domains": [ 00:11:19.019 { 00:11:19.019 "dma_device_id": "system", 00:11:19.019 "dma_device_type": 1 00:11:19.019 }, 00:11:19.019 { 00:11:19.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.019 "dma_device_type": 2 00:11:19.019 } 00:11:19.019 ], 00:11:19.019 "driver_specific": {} 00:11:19.019 } 00:11:19.019 ] 00:11:19.019 21:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.019 21:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:19.019 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:19.019 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:19.019 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:19.019 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.019 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.019 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:19.019 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:19.019 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:19.019 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.019 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.019 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.019 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.019 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:19.019 21:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.019 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.019 21:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.019 21:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.278 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.278 "name": "Existed_Raid", 00:11:19.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.278 "strip_size_kb": 0, 00:11:19.278 "state": "configuring", 00:11:19.278 "raid_level": "raid1", 00:11:19.278 "superblock": false, 00:11:19.278 "num_base_bdevs": 3, 00:11:19.278 "num_base_bdevs_discovered": 2, 00:11:19.278 "num_base_bdevs_operational": 3, 00:11:19.278 "base_bdevs_list": [ 00:11:19.278 { 00:11:19.278 "name": "BaseBdev1", 00:11:19.278 "uuid": "6d4700ee-8c7b-408d-a70e-37611f5eabad", 00:11:19.278 "is_configured": true, 00:11:19.278 "data_offset": 0, 00:11:19.278 "data_size": 65536 00:11:19.278 }, 00:11:19.278 { 00:11:19.278 "name": "BaseBdev2", 00:11:19.278 "uuid": "330feffa-4acd-467c-9fcd-d4a946e336f9", 00:11:19.278 "is_configured": true, 00:11:19.278 "data_offset": 0, 00:11:19.278 "data_size": 65536 00:11:19.278 }, 00:11:19.278 { 00:11:19.278 "name": "BaseBdev3", 00:11:19.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.278 "is_configured": false, 00:11:19.279 "data_offset": 0, 00:11:19.279 "data_size": 0 00:11:19.279 } 00:11:19.279 ] 00:11:19.279 }' 00:11:19.279 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.279 21:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.538 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:11:19.538 21:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.538 21:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.538 [2024-12-10 21:38:20.250407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:19.538 [2024-12-10 21:38:20.250481] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:19.538 [2024-12-10 21:38:20.250495] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:19.538 [2024-12-10 21:38:20.250802] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:19.538 [2024-12-10 21:38:20.250986] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:19.538 [2024-12-10 21:38:20.251002] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:19.538 [2024-12-10 21:38:20.251322] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.538 BaseBdev3 00:11:19.538 21:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.538 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:19.538 21:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:19.538 21:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:19.538 21:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:19.538 21:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:19.538 21:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:19.538 21:38:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:19.538 21:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.538 21:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.538 21:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.538 21:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:19.538 21:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.538 21:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.538 [ 00:11:19.538 { 00:11:19.538 "name": "BaseBdev3", 00:11:19.538 "aliases": [ 00:11:19.538 "00d40426-4d15-49bd-b082-1ab32951952b" 00:11:19.538 ], 00:11:19.538 "product_name": "Malloc disk", 00:11:19.538 "block_size": 512, 00:11:19.538 "num_blocks": 65536, 00:11:19.538 "uuid": "00d40426-4d15-49bd-b082-1ab32951952b", 00:11:19.538 "assigned_rate_limits": { 00:11:19.538 "rw_ios_per_sec": 0, 00:11:19.538 "rw_mbytes_per_sec": 0, 00:11:19.538 "r_mbytes_per_sec": 0, 00:11:19.538 "w_mbytes_per_sec": 0 00:11:19.538 }, 00:11:19.538 "claimed": true, 00:11:19.538 "claim_type": "exclusive_write", 00:11:19.538 "zoned": false, 00:11:19.538 "supported_io_types": { 00:11:19.538 "read": true, 00:11:19.538 "write": true, 00:11:19.538 "unmap": true, 00:11:19.538 "flush": true, 00:11:19.538 "reset": true, 00:11:19.538 "nvme_admin": false, 00:11:19.538 "nvme_io": false, 00:11:19.538 "nvme_io_md": false, 00:11:19.538 "write_zeroes": true, 00:11:19.538 "zcopy": true, 00:11:19.538 "get_zone_info": false, 00:11:19.538 "zone_management": false, 00:11:19.538 "zone_append": false, 00:11:19.538 "compare": false, 00:11:19.538 "compare_and_write": false, 00:11:19.538 "abort": true, 00:11:19.538 "seek_hole": false, 00:11:19.538 "seek_data": false, 00:11:19.538 
"copy": true, 00:11:19.538 "nvme_iov_md": false 00:11:19.538 }, 00:11:19.538 "memory_domains": [ 00:11:19.538 { 00:11:19.538 "dma_device_id": "system", 00:11:19.539 "dma_device_type": 1 00:11:19.539 }, 00:11:19.539 { 00:11:19.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.539 "dma_device_type": 2 00:11:19.539 } 00:11:19.539 ], 00:11:19.539 "driver_specific": {} 00:11:19.539 } 00:11:19.539 ] 00:11:19.539 21:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.539 21:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:19.539 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:19.539 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:19.539 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:19.539 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.539 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:19.539 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:19.539 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:19.539 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:19.539 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.539 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.539 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.539 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.539 21:38:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.539 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.539 21:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.539 21:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.539 21:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.798 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.798 "name": "Existed_Raid", 00:11:19.798 "uuid": "b6f353f3-6b41-4d73-b895-c059a7f1ce4d", 00:11:19.798 "strip_size_kb": 0, 00:11:19.798 "state": "online", 00:11:19.798 "raid_level": "raid1", 00:11:19.798 "superblock": false, 00:11:19.798 "num_base_bdevs": 3, 00:11:19.798 "num_base_bdevs_discovered": 3, 00:11:19.798 "num_base_bdevs_operational": 3, 00:11:19.798 "base_bdevs_list": [ 00:11:19.798 { 00:11:19.798 "name": "BaseBdev1", 00:11:19.798 "uuid": "6d4700ee-8c7b-408d-a70e-37611f5eabad", 00:11:19.798 "is_configured": true, 00:11:19.798 "data_offset": 0, 00:11:19.798 "data_size": 65536 00:11:19.798 }, 00:11:19.798 { 00:11:19.798 "name": "BaseBdev2", 00:11:19.798 "uuid": "330feffa-4acd-467c-9fcd-d4a946e336f9", 00:11:19.798 "is_configured": true, 00:11:19.798 "data_offset": 0, 00:11:19.798 "data_size": 65536 00:11:19.798 }, 00:11:19.798 { 00:11:19.798 "name": "BaseBdev3", 00:11:19.798 "uuid": "00d40426-4d15-49bd-b082-1ab32951952b", 00:11:19.798 "is_configured": true, 00:11:19.798 "data_offset": 0, 00:11:19.798 "data_size": 65536 00:11:19.798 } 00:11:19.798 ] 00:11:19.798 }' 00:11:19.798 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.798 21:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.057 21:38:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:20.057 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:20.057 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:20.057 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:20.057 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:20.057 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:20.057 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:20.057 21:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.057 21:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.057 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:20.057 [2024-12-10 21:38:20.738002] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:20.057 21:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.057 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:20.057 "name": "Existed_Raid", 00:11:20.057 "aliases": [ 00:11:20.057 "b6f353f3-6b41-4d73-b895-c059a7f1ce4d" 00:11:20.057 ], 00:11:20.057 "product_name": "Raid Volume", 00:11:20.057 "block_size": 512, 00:11:20.057 "num_blocks": 65536, 00:11:20.057 "uuid": "b6f353f3-6b41-4d73-b895-c059a7f1ce4d", 00:11:20.057 "assigned_rate_limits": { 00:11:20.057 "rw_ios_per_sec": 0, 00:11:20.057 "rw_mbytes_per_sec": 0, 00:11:20.057 "r_mbytes_per_sec": 0, 00:11:20.057 "w_mbytes_per_sec": 0 00:11:20.057 }, 00:11:20.057 "claimed": false, 00:11:20.057 "zoned": false, 
00:11:20.057 "supported_io_types": { 00:11:20.057 "read": true, 00:11:20.057 "write": true, 00:11:20.057 "unmap": false, 00:11:20.057 "flush": false, 00:11:20.057 "reset": true, 00:11:20.057 "nvme_admin": false, 00:11:20.057 "nvme_io": false, 00:11:20.057 "nvme_io_md": false, 00:11:20.057 "write_zeroes": true, 00:11:20.057 "zcopy": false, 00:11:20.057 "get_zone_info": false, 00:11:20.057 "zone_management": false, 00:11:20.057 "zone_append": false, 00:11:20.057 "compare": false, 00:11:20.057 "compare_and_write": false, 00:11:20.057 "abort": false, 00:11:20.057 "seek_hole": false, 00:11:20.057 "seek_data": false, 00:11:20.057 "copy": false, 00:11:20.057 "nvme_iov_md": false 00:11:20.057 }, 00:11:20.057 "memory_domains": [ 00:11:20.057 { 00:11:20.057 "dma_device_id": "system", 00:11:20.057 "dma_device_type": 1 00:11:20.057 }, 00:11:20.057 { 00:11:20.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.057 "dma_device_type": 2 00:11:20.057 }, 00:11:20.057 { 00:11:20.057 "dma_device_id": "system", 00:11:20.057 "dma_device_type": 1 00:11:20.057 }, 00:11:20.057 { 00:11:20.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.057 "dma_device_type": 2 00:11:20.057 }, 00:11:20.057 { 00:11:20.057 "dma_device_id": "system", 00:11:20.057 "dma_device_type": 1 00:11:20.057 }, 00:11:20.057 { 00:11:20.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.057 "dma_device_type": 2 00:11:20.057 } 00:11:20.057 ], 00:11:20.057 "driver_specific": { 00:11:20.057 "raid": { 00:11:20.057 "uuid": "b6f353f3-6b41-4d73-b895-c059a7f1ce4d", 00:11:20.057 "strip_size_kb": 0, 00:11:20.057 "state": "online", 00:11:20.057 "raid_level": "raid1", 00:11:20.057 "superblock": false, 00:11:20.057 "num_base_bdevs": 3, 00:11:20.057 "num_base_bdevs_discovered": 3, 00:11:20.057 "num_base_bdevs_operational": 3, 00:11:20.057 "base_bdevs_list": [ 00:11:20.057 { 00:11:20.057 "name": "BaseBdev1", 00:11:20.057 "uuid": "6d4700ee-8c7b-408d-a70e-37611f5eabad", 00:11:20.057 "is_configured": true, 00:11:20.057 
"data_offset": 0, 00:11:20.057 "data_size": 65536 00:11:20.057 }, 00:11:20.057 { 00:11:20.057 "name": "BaseBdev2", 00:11:20.057 "uuid": "330feffa-4acd-467c-9fcd-d4a946e336f9", 00:11:20.057 "is_configured": true, 00:11:20.057 "data_offset": 0, 00:11:20.057 "data_size": 65536 00:11:20.057 }, 00:11:20.057 { 00:11:20.057 "name": "BaseBdev3", 00:11:20.057 "uuid": "00d40426-4d15-49bd-b082-1ab32951952b", 00:11:20.057 "is_configured": true, 00:11:20.057 "data_offset": 0, 00:11:20.057 "data_size": 65536 00:11:20.057 } 00:11:20.057 ] 00:11:20.057 } 00:11:20.057 } 00:11:20.057 }' 00:11:20.057 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:20.057 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:20.057 BaseBdev2 00:11:20.057 BaseBdev3' 00:11:20.057 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.317 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:20.317 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:20.317 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:20.317 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.317 21:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.317 21:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.317 21:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.317 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 '
00:11:20.317 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:20.317 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:20.317 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:11:20.317 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:20.317 21:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:20.317 21:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.317 21:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:20.317 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:20.317 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:20.317 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:20.317 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:11:20.317 21:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:20.317 21:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.317 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:20.317 21:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:20.317 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:20.317 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:20.317 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:11:20.317 21:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:20.317 21:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.317 [2024-12-10 21:38:21.029218] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:11:20.575 21:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:20.575 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:11:20.575 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:11:20.575 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:11:20.575 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0
00:11:20.575 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:11:20.575 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:11:20.575 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:20.575 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:20.575 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:20.575 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:20.575 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:11:20.575 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:20.575 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:20.575 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:20.575 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:20.575 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:20.575 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:20.575 21:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:20.575 21:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.575 21:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:20.575 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:20.575 "name": "Existed_Raid",
00:11:20.575 "uuid": "b6f353f3-6b41-4d73-b895-c059a7f1ce4d",
00:11:20.575 "strip_size_kb": 0,
00:11:20.575 "state": "online",
00:11:20.575 "raid_level": "raid1",
00:11:20.575 "superblock": false,
00:11:20.575 "num_base_bdevs": 3,
00:11:20.575 "num_base_bdevs_discovered": 2,
00:11:20.575 "num_base_bdevs_operational": 2,
00:11:20.575 "base_bdevs_list": [
00:11:20.575 {
00:11:20.575 "name": null,
00:11:20.575 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:20.575 "is_configured": false,
00:11:20.575 "data_offset": 0,
00:11:20.575 "data_size": 65536
00:11:20.575 },
00:11:20.575 {
00:11:20.575 "name": "BaseBdev2",
00:11:20.575 "uuid": "330feffa-4acd-467c-9fcd-d4a946e336f9",
00:11:20.575 "is_configured": true,
00:11:20.575 "data_offset": 0,
00:11:20.575 "data_size": 65536
00:11:20.575 },
00:11:20.575 {
00:11:20.575 "name": "BaseBdev3",
00:11:20.575 "uuid": "00d40426-4d15-49bd-b082-1ab32951952b",
00:11:20.575 "is_configured": true,
00:11:20.575 "data_offset": 0,
00:11:20.575 "data_size": 65536
00:11:20.575 }
00:11:20.575 ]
00:11:20.575 }'
00:11:20.575 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:20.575 21:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.833 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:11:20.833 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:11:20.833 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:11:20.833 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:20.833 21:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:20.833 21:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.833 21:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:20.833 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:11:20.833 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:11:20.833 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:11:20.833 21:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:20.833 21:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.833 [2024-12-10 21:38:21.609314] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:11:21.092 21:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:21.092 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:11:21.092 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:11:21.092 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:21.092 21:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:21.092 21:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.092 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:11:21.092 21:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:21.092 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:11:21.092 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:11:21.092 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:11:21.092 21:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:21.092 21:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.092 [2024-12-10 21:38:21.763049] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:11:21.092 [2024-12-10 21:38:21.763255] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:21.351 [2024-12-10 21:38:21.879833] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:21.351 [2024-12-10 21:38:21.879895] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:21.351 [2024-12-10 21:38:21.879907] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:11:21.351 21:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:21.351 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:11:21.351 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:11:21.351 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:21.351 21:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:21.351 21:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.351 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:11:21.351 21:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:21.351 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:11:21.351 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:11:21.351 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:11:21.351 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:11:21.351 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:21.351 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:11:21.351 21:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:21.351 21:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.351 BaseBdev2
00:11:21.351 21:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:21.351 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:11:21.351 21:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:11:21.351 21:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:21.351 21:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:11:21.351 21:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:21.351 21:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:21.351 21:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:21.351 21:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:21.351 21:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.351 21:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:21.351 21:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:11:21.351 21:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:21.351 21:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.351 [
00:11:21.351 {
00:11:21.351 "name": "BaseBdev2",
00:11:21.351 "aliases": [
00:11:21.351 "e3f79375-2798-4a12-bc9c-d04570e4ccc5"
00:11:21.351 ],
00:11:21.351 "product_name": "Malloc disk",
00:11:21.351 "block_size": 512,
00:11:21.351 "num_blocks": 65536,
00:11:21.351 "uuid": "e3f79375-2798-4a12-bc9c-d04570e4ccc5",
00:11:21.351 "assigned_rate_limits": {
00:11:21.351 "rw_ios_per_sec": 0,
00:11:21.351 "rw_mbytes_per_sec": 0,
00:11:21.351 "r_mbytes_per_sec": 0,
00:11:21.351 "w_mbytes_per_sec": 0
00:11:21.351 },
00:11:21.351 "claimed": false,
00:11:21.351 "zoned": false,
00:11:21.351 "supported_io_types": {
00:11:21.351 "read": true,
00:11:21.351 "write": true,
00:11:21.351 "unmap": true,
00:11:21.351 "flush": true,
00:11:21.351 "reset": true,
00:11:21.351 "nvme_admin": false,
00:11:21.351 "nvme_io": false,
00:11:21.351 "nvme_io_md": false,
00:11:21.351 "write_zeroes": true,
00:11:21.351 "zcopy": true,
00:11:21.351 "get_zone_info": false,
00:11:21.351 "zone_management": false,
00:11:21.351 "zone_append": false,
00:11:21.351 "compare": false,
00:11:21.351 "compare_and_write": false,
00:11:21.351 "abort": true,
00:11:21.351 "seek_hole": false,
00:11:21.351 "seek_data": false,
00:11:21.351 "copy": true,
00:11:21.351 "nvme_iov_md": false
00:11:21.351 },
00:11:21.351 "memory_domains": [
00:11:21.351 {
00:11:21.351 "dma_device_id": "system",
00:11:21.351 "dma_device_type": 1
00:11:21.351 },
00:11:21.351 {
00:11:21.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:21.351 "dma_device_type": 2
00:11:21.351 }
00:11:21.351 ],
00:11:21.351 "driver_specific": {}
00:11:21.351 }
00:11:21.351 ]
00:11:21.351 21:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:21.351 21:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:11:21.351 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:11:21.351 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:21.351 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:11:21.351 21:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:21.351 21:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.351 BaseBdev3
00:11:21.351 21:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:21.351 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:11:21.351 21:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:11:21.351 21:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:21.351 21:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:11:21.351 21:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:21.351 21:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:21.351 21:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:21.351 21:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:21.351 21:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.351 21:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:21.351 21:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:11:21.351 21:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:21.351 21:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.351 [
00:11:21.351 {
00:11:21.351 "name": "BaseBdev3",
00:11:21.351 "aliases": [
00:11:21.351 "44f69742-c077-41e8-8ab2-13a94f8dbf0b"
00:11:21.351 ],
00:11:21.351 "product_name": "Malloc disk",
00:11:21.351 "block_size": 512,
00:11:21.351 "num_blocks": 65536,
00:11:21.351 "uuid": "44f69742-c077-41e8-8ab2-13a94f8dbf0b",
00:11:21.351 "assigned_rate_limits": {
00:11:21.351 "rw_ios_per_sec": 0,
00:11:21.351 "rw_mbytes_per_sec": 0,
00:11:21.351 "r_mbytes_per_sec": 0,
00:11:21.351 "w_mbytes_per_sec": 0
00:11:21.351 },
00:11:21.351 "claimed": false,
00:11:21.351 "zoned": false,
00:11:21.351 "supported_io_types": {
00:11:21.352 "read": true,
00:11:21.352 "write": true,
00:11:21.352 "unmap": true,
00:11:21.352 "flush": true,
00:11:21.352 "reset": true,
00:11:21.352 "nvme_admin": false,
00:11:21.352 "nvme_io": false,
00:11:21.352 "nvme_io_md": false,
00:11:21.352 "write_zeroes": true,
00:11:21.352 "zcopy": true,
00:11:21.352 "get_zone_info": false,
00:11:21.352 "zone_management": false,
00:11:21.352 "zone_append": false,
00:11:21.352 "compare": false,
00:11:21.352 "compare_and_write": false,
00:11:21.352 "abort": true,
00:11:21.352 "seek_hole": false,
00:11:21.352 "seek_data": false,
00:11:21.352 "copy": true,
00:11:21.352 "nvme_iov_md": false
00:11:21.352 },
00:11:21.352 "memory_domains": [
00:11:21.352 {
00:11:21.352 "dma_device_id": "system",
00:11:21.352 "dma_device_type": 1
00:11:21.352 },
00:11:21.352 {
00:11:21.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:21.352 "dma_device_type": 2
00:11:21.352 }
00:11:21.352 ],
00:11:21.352 "driver_specific": {}
00:11:21.352 }
00:11:21.352 ]
00:11:21.352 21:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:21.352 21:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:11:21.352 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:11:21.352 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:21.352 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:11:21.352 21:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:21.352 21:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.352 [2024-12-10 21:38:22.093885] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:11:21.352 [2024-12-10 21:38:22.093994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:11:21.352 [2024-12-10 21:38:22.094047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:21.352 [2024-12-10 21:38:22.096058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:21.352 21:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:21.352 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:11:21.352 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:21.352 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:21.352 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:21.352 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:21.352 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:21.352 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:21.352 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:21.352 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:21.352 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:21.352 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:21.352 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:21.352 21:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:21.352 21:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.352 21:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:21.610 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:21.610 "name": "Existed_Raid",
00:11:21.610 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:21.610 "strip_size_kb": 0,
00:11:21.610 "state": "configuring",
00:11:21.610 "raid_level": "raid1",
00:11:21.610 "superblock": false,
00:11:21.610 "num_base_bdevs": 3,
00:11:21.610 "num_base_bdevs_discovered": 2,
00:11:21.610 "num_base_bdevs_operational": 3,
00:11:21.610 "base_bdevs_list": [
00:11:21.610 {
00:11:21.610 "name": "BaseBdev1",
00:11:21.610 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:21.610 "is_configured": false,
00:11:21.610 "data_offset": 0,
00:11:21.610 "data_size": 0
00:11:21.610 },
00:11:21.610 {
00:11:21.610 "name": "BaseBdev2",
00:11:21.610 "uuid": "e3f79375-2798-4a12-bc9c-d04570e4ccc5",
00:11:21.610 "is_configured": true,
00:11:21.610 "data_offset": 0,
00:11:21.610 "data_size": 65536
00:11:21.610 },
00:11:21.610 {
00:11:21.610 "name": "BaseBdev3",
00:11:21.610 "uuid": "44f69742-c077-41e8-8ab2-13a94f8dbf0b",
00:11:21.610 "is_configured": true,
00:11:21.610 "data_offset": 0,
00:11:21.610 "data_size": 65536
00:11:21.610 }
00:11:21.610 ]
00:11:21.610 }'
00:11:21.610 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:21.610 21:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.869 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:11:21.869 21:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:21.869 21:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.869 [2024-12-10 21:38:22.557139] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:11:21.869 21:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:21.869 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:11:21.869 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:21.869 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:21.869 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:21.869 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:21.869 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:21.869 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:21.869 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:21.869 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:21.869 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:21.869 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:21.869 21:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:21.869 21:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.869 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:21.869 21:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:21.869 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:21.869 "name": "Existed_Raid",
00:11:21.869 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:21.869 "strip_size_kb": 0,
00:11:21.869 "state": "configuring",
00:11:21.869 "raid_level": "raid1",
00:11:21.869 "superblock": false,
00:11:21.869 "num_base_bdevs": 3,
00:11:21.869 "num_base_bdevs_discovered": 1,
00:11:21.869 "num_base_bdevs_operational": 3,
00:11:21.869 "base_bdevs_list": [
00:11:21.869 {
00:11:21.869 "name": "BaseBdev1",
00:11:21.869 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:21.869 "is_configured": false,
00:11:21.869 "data_offset": 0,
00:11:21.869 "data_size": 0
00:11:21.869 },
00:11:21.869 {
00:11:21.869 "name": null,
00:11:21.869 "uuid": "e3f79375-2798-4a12-bc9c-d04570e4ccc5",
00:11:21.869 "is_configured": false,
00:11:21.869 "data_offset": 0,
00:11:21.869 "data_size": 65536
00:11:21.869 },
00:11:21.869 {
00:11:21.869 "name": "BaseBdev3",
00:11:21.869 "uuid": "44f69742-c077-41e8-8ab2-13a94f8dbf0b",
00:11:21.869 "is_configured": true,
00:11:21.869 "data_offset": 0,
00:11:21.869 "data_size": 65536
00:11:21.869 }
00:11:21.869 ]
00:11:21.869 }'
00:11:21.869 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:21.869 21:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.437 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:22.437 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.437 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.437 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:11:22.437 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.437 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:11:22.437 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:11:22.437 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.437 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.437 [2024-12-10 21:38:23.104715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
BaseBdev1
00:11:22.437 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.437 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:11:22.437 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:11:22.437 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:22.437 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:11:22.437 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:22.437 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:22.437 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:22.437 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.437 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.437 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.437 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:11:22.437 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.437 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.437 [
00:11:22.437 {
00:11:22.437 "name": "BaseBdev1",
00:11:22.437 "aliases": [
00:11:22.437 "6563b961-2fcd-4f05-9ec8-09f47f37480a"
00:11:22.437 ],
00:11:22.437 "product_name": "Malloc disk",
00:11:22.437 "block_size": 512,
00:11:22.437 "num_blocks": 65536,
00:11:22.437 "uuid": "6563b961-2fcd-4f05-9ec8-09f47f37480a",
00:11:22.437 "assigned_rate_limits": {
00:11:22.437 "rw_ios_per_sec": 0,
00:11:22.437 "rw_mbytes_per_sec": 0,
00:11:22.437 "r_mbytes_per_sec": 0,
00:11:22.437 "w_mbytes_per_sec": 0
00:11:22.437 },
00:11:22.437 "claimed": true,
00:11:22.437 "claim_type": "exclusive_write",
00:11:22.437 "zoned": false,
00:11:22.437 "supported_io_types": {
00:11:22.437 "read": true,
00:11:22.437 "write": true,
00:11:22.437 "unmap": true,
00:11:22.437 "flush": true,
00:11:22.437 "reset": true,
00:11:22.437 "nvme_admin": false,
00:11:22.437 "nvme_io": false,
00:11:22.437 "nvme_io_md": false,
00:11:22.437 "write_zeroes": true,
00:11:22.437 "zcopy": true,
00:11:22.437 "get_zone_info": false,
00:11:22.437 "zone_management": false,
00:11:22.437 "zone_append": false,
00:11:22.437 "compare": false,
00:11:22.437 "compare_and_write": false,
00:11:22.437 "abort": true,
00:11:22.437 "seek_hole": false,
00:11:22.437 "seek_data": false,
00:11:22.437 "copy": true,
00:11:22.437 "nvme_iov_md": false
00:11:22.437 },
00:11:22.437 "memory_domains": [
00:11:22.437 {
00:11:22.437 "dma_device_id": "system",
00:11:22.437 "dma_device_type": 1
00:11:22.437 },
00:11:22.437 {
00:11:22.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:22.437 "dma_device_type": 2
00:11:22.437 }
00:11:22.437 ],
00:11:22.437 "driver_specific": {}
00:11:22.437 }
00:11:22.437 ]
00:11:22.437 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.437 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:11:22.437 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:11:22.437 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:22.437 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:22.437 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:22.437 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:22.437 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:22.437 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:22.437 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:22.437 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:22.437 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:22.437 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:22.437 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:22.437 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.437 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.438 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.438 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:22.438 "name": "Existed_Raid",
00:11:22.438 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:22.438 "strip_size_kb": 0,
00:11:22.438 "state": "configuring",
00:11:22.438 "raid_level": "raid1",
00:11:22.438 "superblock": false,
00:11:22.438 "num_base_bdevs": 3,
00:11:22.438 "num_base_bdevs_discovered": 2,
00:11:22.438 "num_base_bdevs_operational": 3,
00:11:22.438 "base_bdevs_list": [
00:11:22.438 {
00:11:22.438 "name": "BaseBdev1",
00:11:22.438 "uuid": "6563b961-2fcd-4f05-9ec8-09f47f37480a",
00:11:22.438 "is_configured": true,
00:11:22.438 "data_offset": 0,
00:11:22.438 "data_size": 65536
00:11:22.438 },
00:11:22.438 {
00:11:22.438 "name": null,
00:11:22.438 "uuid": "e3f79375-2798-4a12-bc9c-d04570e4ccc5",
00:11:22.438 "is_configured": false,
00:11:22.438 "data_offset": 0,
00:11:22.438 "data_size": 65536
00:11:22.438 },
00:11:22.438 {
00:11:22.438 "name": "BaseBdev3",
00:11:22.438 "uuid": "44f69742-c077-41e8-8ab2-13a94f8dbf0b",
00:11:22.438 "is_configured": true,
00:11:22.438 "data_offset": 0,
00:11:22.438 "data_size": 65536
00:11:22.438 }
00:11:22.438 ]
00:11:22.438 }'
00:11:22.438 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:22.438 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.005 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:23.005 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.006 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.006 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:11:23.006 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.006 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:11:23.006 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:11:23.006 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.006 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.006 [2024-12-10 21:38:23.631867] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:11:23.006 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.006 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:11:23.006 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:23.006 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:23.006 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:23.006 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:23.006 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:23.006 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:23.006 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:23.006 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:23.006 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:23.006 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:23.006 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:23.006 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.006 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.006 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.006 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:23.006 "name": "Existed_Raid",
00:11:23.006 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:23.006 "strip_size_kb": 0,
00:11:23.006 "state": "configuring",
00:11:23.006 "raid_level": "raid1",
00:11:23.006 "superblock": false,
00:11:23.006 "num_base_bdevs": 3,
00:11:23.006 "num_base_bdevs_discovered": 1,
00:11:23.006 "num_base_bdevs_operational": 3,
00:11:23.006 "base_bdevs_list": [
00:11:23.006 {
00:11:23.006 "name": "BaseBdev1",
00:11:23.006 "uuid": "6563b961-2fcd-4f05-9ec8-09f47f37480a",
00:11:23.006 "is_configured": true,
00:11:23.006 "data_offset": 0,
00:11:23.006 "data_size": 65536
00:11:23.006 },
00:11:23.006 {
00:11:23.006 "name": null,
00:11:23.006 "uuid": "e3f79375-2798-4a12-bc9c-d04570e4ccc5",
00:11:23.006 "is_configured": false,
00:11:23.006 "data_offset": 0,
00:11:23.006 "data_size": 65536
00:11:23.006 },
00:11:23.006 {
00:11:23.006 "name": null,
00:11:23.006 "uuid": "44f69742-c077-41e8-8ab2-13a94f8dbf0b",
00:11:23.006 "is_configured": false,
00:11:23.006 "data_offset": 0,
00:11:23.006 "data_size": 65536
00:11:23.006 }
00:11:23.006 ]
00:11:23.006 }'
00:11:23.006 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:23.006 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.574 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:23.574 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:11:23.574 21:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.574 21:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.574 21:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.574 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:11:23.574 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:11:23.574 21:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.574 21:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.574 [2024-12-10 21:38:24.099134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:23.574 21:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.574 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:11:23.574 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:23.574 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:23.574 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:23.574 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:23.574 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:23.574 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:23.574 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:23.574 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:23.574 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:23.574 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:23.574 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:23.574 21:38:24 bdev_raid.raid_state_function_test --
common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.574 21:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.574 21:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.574 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.574 "name": "Existed_Raid", 00:11:23.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.574 "strip_size_kb": 0, 00:11:23.574 "state": "configuring", 00:11:23.574 "raid_level": "raid1", 00:11:23.574 "superblock": false, 00:11:23.574 "num_base_bdevs": 3, 00:11:23.574 "num_base_bdevs_discovered": 2, 00:11:23.574 "num_base_bdevs_operational": 3, 00:11:23.574 "base_bdevs_list": [ 00:11:23.574 { 00:11:23.574 "name": "BaseBdev1", 00:11:23.574 "uuid": "6563b961-2fcd-4f05-9ec8-09f47f37480a", 00:11:23.574 "is_configured": true, 00:11:23.574 "data_offset": 0, 00:11:23.574 "data_size": 65536 00:11:23.574 }, 00:11:23.574 { 00:11:23.574 "name": null, 00:11:23.574 "uuid": "e3f79375-2798-4a12-bc9c-d04570e4ccc5", 00:11:23.574 "is_configured": false, 00:11:23.574 "data_offset": 0, 00:11:23.574 "data_size": 65536 00:11:23.574 }, 00:11:23.574 { 00:11:23.574 "name": "BaseBdev3", 00:11:23.574 "uuid": "44f69742-c077-41e8-8ab2-13a94f8dbf0b", 00:11:23.574 "is_configured": true, 00:11:23.574 "data_offset": 0, 00:11:23.574 "data_size": 65536 00:11:23.574 } 00:11:23.574 ] 00:11:23.574 }' 00:11:23.574 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.574 21:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.834 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.834 21:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.834 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:11:23.834 21:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.834 21:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.834 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:23.834 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:23.834 21:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.834 21:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.834 [2024-12-10 21:38:24.538452] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:24.093 21:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.093 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:24.093 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.093 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.093 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.093 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.093 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:24.093 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.093 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.093 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.093 21:38:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.093 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.093 21:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.093 21:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.093 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.093 21:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.093 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.093 "name": "Existed_Raid", 00:11:24.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.093 "strip_size_kb": 0, 00:11:24.093 "state": "configuring", 00:11:24.093 "raid_level": "raid1", 00:11:24.093 "superblock": false, 00:11:24.093 "num_base_bdevs": 3, 00:11:24.093 "num_base_bdevs_discovered": 1, 00:11:24.093 "num_base_bdevs_operational": 3, 00:11:24.093 "base_bdevs_list": [ 00:11:24.093 { 00:11:24.093 "name": null, 00:11:24.093 "uuid": "6563b961-2fcd-4f05-9ec8-09f47f37480a", 00:11:24.093 "is_configured": false, 00:11:24.093 "data_offset": 0, 00:11:24.093 "data_size": 65536 00:11:24.093 }, 00:11:24.093 { 00:11:24.093 "name": null, 00:11:24.093 "uuid": "e3f79375-2798-4a12-bc9c-d04570e4ccc5", 00:11:24.093 "is_configured": false, 00:11:24.093 "data_offset": 0, 00:11:24.093 "data_size": 65536 00:11:24.093 }, 00:11:24.093 { 00:11:24.093 "name": "BaseBdev3", 00:11:24.093 "uuid": "44f69742-c077-41e8-8ab2-13a94f8dbf0b", 00:11:24.093 "is_configured": true, 00:11:24.093 "data_offset": 0, 00:11:24.093 "data_size": 65536 00:11:24.093 } 00:11:24.093 ] 00:11:24.093 }' 00:11:24.093 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.093 21:38:24 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:11:24.355 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:24.355 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.355 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.355 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.355 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.355 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:24.355 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:24.355 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.355 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.355 [2024-12-10 21:38:25.112581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:24.355 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.355 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:24.355 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.355 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.355 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.355 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.355 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:11:24.355 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.355 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.355 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.355 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.355 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.355 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.355 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.355 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.616 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.616 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.616 "name": "Existed_Raid", 00:11:24.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.616 "strip_size_kb": 0, 00:11:24.616 "state": "configuring", 00:11:24.616 "raid_level": "raid1", 00:11:24.616 "superblock": false, 00:11:24.616 "num_base_bdevs": 3, 00:11:24.616 "num_base_bdevs_discovered": 2, 00:11:24.616 "num_base_bdevs_operational": 3, 00:11:24.616 "base_bdevs_list": [ 00:11:24.616 { 00:11:24.616 "name": null, 00:11:24.616 "uuid": "6563b961-2fcd-4f05-9ec8-09f47f37480a", 00:11:24.616 "is_configured": false, 00:11:24.616 "data_offset": 0, 00:11:24.616 "data_size": 65536 00:11:24.616 }, 00:11:24.616 { 00:11:24.616 "name": "BaseBdev2", 00:11:24.616 "uuid": "e3f79375-2798-4a12-bc9c-d04570e4ccc5", 00:11:24.616 "is_configured": true, 00:11:24.616 "data_offset": 0, 00:11:24.616 "data_size": 65536 00:11:24.616 }, 00:11:24.616 { 
00:11:24.616 "name": "BaseBdev3", 00:11:24.616 "uuid": "44f69742-c077-41e8-8ab2-13a94f8dbf0b", 00:11:24.616 "is_configured": true, 00:11:24.616 "data_offset": 0, 00:11:24.616 "data_size": 65536 00:11:24.616 } 00:11:24.616 ] 00:11:24.616 }' 00:11:24.616 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.616 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.875 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.875 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.875 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.875 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:24.875 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.875 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:24.875 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:24.875 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.875 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.875 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.875 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.875 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6563b961-2fcd-4f05-9ec8-09f47f37480a 00:11:24.875 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.875 21:38:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.875 [2024-12-10 21:38:25.601813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:24.875 [2024-12-10 21:38:25.601879] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:24.875 [2024-12-10 21:38:25.601887] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:24.875 [2024-12-10 21:38:25.602143] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:24.875 [2024-12-10 21:38:25.602281] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:24.875 [2024-12-10 21:38:25.602302] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:24.875 [2024-12-10 21:38:25.602610] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.875 NewBaseBdev 00:11:24.875 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.875 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:24.875 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:24.875 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:24.875 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:24.875 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:24.875 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:24.876 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:24.876 21:38:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.876 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.876 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.876 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:24.876 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.876 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.876 [ 00:11:24.876 { 00:11:24.876 "name": "NewBaseBdev", 00:11:24.876 "aliases": [ 00:11:24.876 "6563b961-2fcd-4f05-9ec8-09f47f37480a" 00:11:24.876 ], 00:11:24.876 "product_name": "Malloc disk", 00:11:24.876 "block_size": 512, 00:11:24.876 "num_blocks": 65536, 00:11:24.876 "uuid": "6563b961-2fcd-4f05-9ec8-09f47f37480a", 00:11:24.876 "assigned_rate_limits": { 00:11:24.876 "rw_ios_per_sec": 0, 00:11:24.876 "rw_mbytes_per_sec": 0, 00:11:24.876 "r_mbytes_per_sec": 0, 00:11:24.876 "w_mbytes_per_sec": 0 00:11:24.876 }, 00:11:24.876 "claimed": true, 00:11:24.876 "claim_type": "exclusive_write", 00:11:24.876 "zoned": false, 00:11:24.876 "supported_io_types": { 00:11:24.876 "read": true, 00:11:24.876 "write": true, 00:11:24.876 "unmap": true, 00:11:24.876 "flush": true, 00:11:24.876 "reset": true, 00:11:24.876 "nvme_admin": false, 00:11:24.876 "nvme_io": false, 00:11:24.876 "nvme_io_md": false, 00:11:24.876 "write_zeroes": true, 00:11:24.876 "zcopy": true, 00:11:24.876 "get_zone_info": false, 00:11:24.876 "zone_management": false, 00:11:24.876 "zone_append": false, 00:11:24.876 "compare": false, 00:11:24.876 "compare_and_write": false, 00:11:24.876 "abort": true, 00:11:24.876 "seek_hole": false, 00:11:24.876 "seek_data": false, 00:11:24.876 "copy": true, 00:11:24.876 "nvme_iov_md": false 00:11:24.876 }, 00:11:24.876 "memory_domains": [ 00:11:24.876 { 00:11:24.876 
"dma_device_id": "system", 00:11:24.876 "dma_device_type": 1 00:11:24.876 }, 00:11:24.876 { 00:11:24.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.876 "dma_device_type": 2 00:11:24.876 } 00:11:24.876 ], 00:11:24.876 "driver_specific": {} 00:11:24.876 } 00:11:24.876 ] 00:11:24.876 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.876 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:24.876 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:24.876 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.876 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:24.876 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.876 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.876 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:24.876 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.876 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.876 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.876 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.876 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.876 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.876 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.876 21:38:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.135 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.135 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.135 "name": "Existed_Raid", 00:11:25.135 "uuid": "09cbae43-73d1-438d-b1c6-4b87391828c3", 00:11:25.135 "strip_size_kb": 0, 00:11:25.135 "state": "online", 00:11:25.135 "raid_level": "raid1", 00:11:25.135 "superblock": false, 00:11:25.135 "num_base_bdevs": 3, 00:11:25.135 "num_base_bdevs_discovered": 3, 00:11:25.135 "num_base_bdevs_operational": 3, 00:11:25.135 "base_bdevs_list": [ 00:11:25.135 { 00:11:25.135 "name": "NewBaseBdev", 00:11:25.135 "uuid": "6563b961-2fcd-4f05-9ec8-09f47f37480a", 00:11:25.135 "is_configured": true, 00:11:25.135 "data_offset": 0, 00:11:25.135 "data_size": 65536 00:11:25.135 }, 00:11:25.135 { 00:11:25.135 "name": "BaseBdev2", 00:11:25.135 "uuid": "e3f79375-2798-4a12-bc9c-d04570e4ccc5", 00:11:25.135 "is_configured": true, 00:11:25.135 "data_offset": 0, 00:11:25.135 "data_size": 65536 00:11:25.135 }, 00:11:25.135 { 00:11:25.135 "name": "BaseBdev3", 00:11:25.135 "uuid": "44f69742-c077-41e8-8ab2-13a94f8dbf0b", 00:11:25.135 "is_configured": true, 00:11:25.135 "data_offset": 0, 00:11:25.135 "data_size": 65536 00:11:25.135 } 00:11:25.135 ] 00:11:25.135 }' 00:11:25.135 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.135 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.393 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:25.393 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:25.393 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:25.393 
21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:25.393 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:25.393 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:25.393 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:25.393 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.393 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.393 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:25.393 [2024-12-10 21:38:26.113350] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:25.393 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.393 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:25.393 "name": "Existed_Raid", 00:11:25.393 "aliases": [ 00:11:25.393 "09cbae43-73d1-438d-b1c6-4b87391828c3" 00:11:25.393 ], 00:11:25.393 "product_name": "Raid Volume", 00:11:25.393 "block_size": 512, 00:11:25.393 "num_blocks": 65536, 00:11:25.393 "uuid": "09cbae43-73d1-438d-b1c6-4b87391828c3", 00:11:25.393 "assigned_rate_limits": { 00:11:25.393 "rw_ios_per_sec": 0, 00:11:25.393 "rw_mbytes_per_sec": 0, 00:11:25.393 "r_mbytes_per_sec": 0, 00:11:25.393 "w_mbytes_per_sec": 0 00:11:25.393 }, 00:11:25.393 "claimed": false, 00:11:25.393 "zoned": false, 00:11:25.393 "supported_io_types": { 00:11:25.393 "read": true, 00:11:25.393 "write": true, 00:11:25.393 "unmap": false, 00:11:25.393 "flush": false, 00:11:25.393 "reset": true, 00:11:25.393 "nvme_admin": false, 00:11:25.393 "nvme_io": false, 00:11:25.393 "nvme_io_md": false, 00:11:25.393 "write_zeroes": true, 00:11:25.393 "zcopy": false, 00:11:25.393 
"get_zone_info": false, 00:11:25.393 "zone_management": false, 00:11:25.393 "zone_append": false, 00:11:25.393 "compare": false, 00:11:25.393 "compare_and_write": false, 00:11:25.393 "abort": false, 00:11:25.393 "seek_hole": false, 00:11:25.393 "seek_data": false, 00:11:25.393 "copy": false, 00:11:25.393 "nvme_iov_md": false 00:11:25.393 }, 00:11:25.393 "memory_domains": [ 00:11:25.393 { 00:11:25.393 "dma_device_id": "system", 00:11:25.393 "dma_device_type": 1 00:11:25.393 }, 00:11:25.393 { 00:11:25.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.393 "dma_device_type": 2 00:11:25.393 }, 00:11:25.393 { 00:11:25.393 "dma_device_id": "system", 00:11:25.393 "dma_device_type": 1 00:11:25.393 }, 00:11:25.393 { 00:11:25.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.393 "dma_device_type": 2 00:11:25.393 }, 00:11:25.393 { 00:11:25.393 "dma_device_id": "system", 00:11:25.393 "dma_device_type": 1 00:11:25.393 }, 00:11:25.393 { 00:11:25.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.393 "dma_device_type": 2 00:11:25.393 } 00:11:25.393 ], 00:11:25.393 "driver_specific": { 00:11:25.393 "raid": { 00:11:25.393 "uuid": "09cbae43-73d1-438d-b1c6-4b87391828c3", 00:11:25.393 "strip_size_kb": 0, 00:11:25.393 "state": "online", 00:11:25.393 "raid_level": "raid1", 00:11:25.393 "superblock": false, 00:11:25.393 "num_base_bdevs": 3, 00:11:25.393 "num_base_bdevs_discovered": 3, 00:11:25.393 "num_base_bdevs_operational": 3, 00:11:25.393 "base_bdevs_list": [ 00:11:25.393 { 00:11:25.393 "name": "NewBaseBdev", 00:11:25.393 "uuid": "6563b961-2fcd-4f05-9ec8-09f47f37480a", 00:11:25.393 "is_configured": true, 00:11:25.393 "data_offset": 0, 00:11:25.393 "data_size": 65536 00:11:25.393 }, 00:11:25.393 { 00:11:25.393 "name": "BaseBdev2", 00:11:25.393 "uuid": "e3f79375-2798-4a12-bc9c-d04570e4ccc5", 00:11:25.393 "is_configured": true, 00:11:25.393 "data_offset": 0, 00:11:25.393 "data_size": 65536 00:11:25.393 }, 00:11:25.393 { 00:11:25.393 "name": "BaseBdev3", 00:11:25.393 "uuid": 
"44f69742-c077-41e8-8ab2-13a94f8dbf0b", 00:11:25.393 "is_configured": true, 00:11:25.393 "data_offset": 0, 00:11:25.393 "data_size": 65536 00:11:25.393 } 00:11:25.393 ] 00:11:25.393 } 00:11:25.393 } 00:11:25.393 }' 00:11:25.393 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:25.652 BaseBdev2 00:11:25.652 BaseBdev3' 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.652 
[2024-12-10 21:38:26.372614] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:25.652 [2024-12-10 21:38:26.372653] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:25.652 [2024-12-10 21:38:26.372739] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:25.652 [2024-12-10 21:38:26.373062] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:25.652 [2024-12-10 21:38:26.373083] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67514 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67514 ']' 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67514 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67514 00:11:25.652 killing process with pid 67514 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67514' 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67514 00:11:25.652 [2024-12-10 
21:38:26.418034] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:25.652 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67514 00:11:26.216 [2024-12-10 21:38:26.726999] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:27.148 ************************************ 00:11:27.148 END TEST raid_state_function_test 00:11:27.148 ************************************ 00:11:27.148 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:27.148 00:11:27.148 real 0m10.603s 00:11:27.148 user 0m16.732s 00:11:27.148 sys 0m1.899s 00:11:27.148 21:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:27.148 21:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.148 21:38:27 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:11:27.148 21:38:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:27.148 21:38:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.148 21:38:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:27.148 ************************************ 00:11:27.148 START TEST raid_state_function_test_sb 00:11:27.148 ************************************ 00:11:27.148 21:38:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:11:27.148 21:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:27.148 21:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:27.148 21:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:27.148 21:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:27.406 21:38:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:27.406 21:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:27.406 21:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:27.406 21:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:27.406 21:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:27.406 21:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:27.406 21:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:27.406 21:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:27.406 21:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:27.406 21:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:27.406 21:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:27.406 21:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:27.406 21:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:27.406 21:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:27.406 21:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:27.406 21:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:27.406 21:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:27.406 21:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:27.406 
21:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:27.406 21:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:27.406 21:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:27.406 21:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68140 00:11:27.406 21:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:27.406 21:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68140' 00:11:27.406 Process raid pid: 68140 00:11:27.406 21:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68140 00:11:27.406 21:38:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68140 ']' 00:11:27.406 21:38:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.406 21:38:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:27.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.406 21:38:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.406 21:38:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:27.406 21:38:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.406 [2024-12-10 21:38:28.028624] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:11:27.406 [2024-12-10 21:38:28.028743] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:27.664 [2024-12-10 21:38:28.204003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.664 [2024-12-10 21:38:28.323117] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.922 [2024-12-10 21:38:28.534090] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:27.922 [2024-12-10 21:38:28.534150] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:28.180 21:38:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:28.180 21:38:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:28.180 21:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:28.180 21:38:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.180 21:38:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.180 [2024-12-10 21:38:28.886340] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:28.180 [2024-12-10 21:38:28.886400] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:28.180 [2024-12-10 21:38:28.886410] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:28.180 [2024-12-10 21:38:28.886435] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:28.180 [2024-12-10 21:38:28.886448] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:28.180 [2024-12-10 21:38:28.886457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:28.180 21:38:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.180 21:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:28.180 21:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.180 21:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.181 21:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.181 21:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.181 21:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:28.181 21:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.181 21:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.181 21:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.181 21:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.181 21:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.181 21:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.181 21:38:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.181 21:38:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.181 21:38:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.181 21:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.181 "name": "Existed_Raid", 00:11:28.181 "uuid": "256e7784-6902-412a-acf0-d53430b40c95", 00:11:28.181 "strip_size_kb": 0, 00:11:28.181 "state": "configuring", 00:11:28.181 "raid_level": "raid1", 00:11:28.181 "superblock": true, 00:11:28.181 "num_base_bdevs": 3, 00:11:28.181 "num_base_bdevs_discovered": 0, 00:11:28.181 "num_base_bdevs_operational": 3, 00:11:28.181 "base_bdevs_list": [ 00:11:28.181 { 00:11:28.181 "name": "BaseBdev1", 00:11:28.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.181 "is_configured": false, 00:11:28.181 "data_offset": 0, 00:11:28.181 "data_size": 0 00:11:28.181 }, 00:11:28.181 { 00:11:28.181 "name": "BaseBdev2", 00:11:28.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.181 "is_configured": false, 00:11:28.181 "data_offset": 0, 00:11:28.181 "data_size": 0 00:11:28.181 }, 00:11:28.181 { 00:11:28.181 "name": "BaseBdev3", 00:11:28.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.181 "is_configured": false, 00:11:28.181 "data_offset": 0, 00:11:28.181 "data_size": 0 00:11:28.181 } 00:11:28.181 ] 00:11:28.181 }' 00:11:28.181 21:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.181 21:38:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.746 [2024-12-10 21:38:29.313590] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:28.746 [2024-12-10 21:38:29.313730] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.746 [2024-12-10 21:38:29.325576] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:28.746 [2024-12-10 21:38:29.325711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:28.746 [2024-12-10 21:38:29.325748] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:28.746 [2024-12-10 21:38:29.325774] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:28.746 [2024-12-10 21:38:29.325819] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:28.746 [2024-12-10 21:38:29.325844] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.746 [2024-12-10 21:38:29.378843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:28.746 BaseBdev1 
00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.746 [ 00:11:28.746 { 00:11:28.746 "name": "BaseBdev1", 00:11:28.746 "aliases": [ 00:11:28.746 "6c7637a3-2a30-48b0-ba8a-974e6eff4e03" 00:11:28.746 ], 00:11:28.746 "product_name": "Malloc disk", 00:11:28.746 "block_size": 512, 00:11:28.746 "num_blocks": 65536, 00:11:28.746 "uuid": "6c7637a3-2a30-48b0-ba8a-974e6eff4e03", 00:11:28.746 "assigned_rate_limits": { 00:11:28.746 
"rw_ios_per_sec": 0, 00:11:28.746 "rw_mbytes_per_sec": 0, 00:11:28.746 "r_mbytes_per_sec": 0, 00:11:28.746 "w_mbytes_per_sec": 0 00:11:28.746 }, 00:11:28.746 "claimed": true, 00:11:28.746 "claim_type": "exclusive_write", 00:11:28.746 "zoned": false, 00:11:28.746 "supported_io_types": { 00:11:28.746 "read": true, 00:11:28.746 "write": true, 00:11:28.746 "unmap": true, 00:11:28.746 "flush": true, 00:11:28.746 "reset": true, 00:11:28.746 "nvme_admin": false, 00:11:28.746 "nvme_io": false, 00:11:28.746 "nvme_io_md": false, 00:11:28.746 "write_zeroes": true, 00:11:28.746 "zcopy": true, 00:11:28.746 "get_zone_info": false, 00:11:28.746 "zone_management": false, 00:11:28.746 "zone_append": false, 00:11:28.746 "compare": false, 00:11:28.746 "compare_and_write": false, 00:11:28.746 "abort": true, 00:11:28.746 "seek_hole": false, 00:11:28.746 "seek_data": false, 00:11:28.746 "copy": true, 00:11:28.746 "nvme_iov_md": false 00:11:28.746 }, 00:11:28.746 "memory_domains": [ 00:11:28.746 { 00:11:28.746 "dma_device_id": "system", 00:11:28.746 "dma_device_type": 1 00:11:28.746 }, 00:11:28.746 { 00:11:28.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.746 "dma_device_type": 2 00:11:28.746 } 00:11:28.746 ], 00:11:28.746 "driver_specific": {} 00:11:28.746 } 00:11:28.746 ] 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.746 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.746 "name": "Existed_Raid", 00:11:28.746 "uuid": "6d46ab61-f1b1-4a75-bd7f-250fd078d690", 00:11:28.746 "strip_size_kb": 0, 00:11:28.746 "state": "configuring", 00:11:28.746 "raid_level": "raid1", 00:11:28.746 "superblock": true, 00:11:28.746 "num_base_bdevs": 3, 00:11:28.746 "num_base_bdevs_discovered": 1, 00:11:28.746 "num_base_bdevs_operational": 3, 00:11:28.746 "base_bdevs_list": [ 00:11:28.746 { 00:11:28.746 "name": "BaseBdev1", 00:11:28.746 "uuid": "6c7637a3-2a30-48b0-ba8a-974e6eff4e03", 00:11:28.746 "is_configured": true, 00:11:28.746 "data_offset": 2048, 00:11:28.746 "data_size": 63488 
00:11:28.746 }, 00:11:28.746 { 00:11:28.746 "name": "BaseBdev2", 00:11:28.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.746 "is_configured": false, 00:11:28.746 "data_offset": 0, 00:11:28.746 "data_size": 0 00:11:28.746 }, 00:11:28.747 { 00:11:28.747 "name": "BaseBdev3", 00:11:28.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.747 "is_configured": false, 00:11:28.747 "data_offset": 0, 00:11:28.747 "data_size": 0 00:11:28.747 } 00:11:28.747 ] 00:11:28.747 }' 00:11:28.747 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.747 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.312 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:29.312 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.312 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.312 [2024-12-10 21:38:29.870101] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:29.312 [2024-12-10 21:38:29.870231] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:29.312 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.312 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:29.312 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.312 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.312 [2024-12-10 21:38:29.882192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:29.312 [2024-12-10 21:38:29.884396] 
bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:29.312 [2024-12-10 21:38:29.884535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:29.312 [2024-12-10 21:38:29.884595] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:29.312 [2024-12-10 21:38:29.884625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:29.312 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.312 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:29.312 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:29.312 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:29.312 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.312 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.312 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.312 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.312 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:29.312 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.312 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.312 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.312 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:11:29.312 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.312 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.312 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.312 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.312 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.312 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.312 "name": "Existed_Raid", 00:11:29.312 "uuid": "2469fd62-06c5-44ee-9615-dd45c489eab8", 00:11:29.312 "strip_size_kb": 0, 00:11:29.312 "state": "configuring", 00:11:29.312 "raid_level": "raid1", 00:11:29.312 "superblock": true, 00:11:29.312 "num_base_bdevs": 3, 00:11:29.312 "num_base_bdevs_discovered": 1, 00:11:29.312 "num_base_bdevs_operational": 3, 00:11:29.312 "base_bdevs_list": [ 00:11:29.312 { 00:11:29.312 "name": "BaseBdev1", 00:11:29.312 "uuid": "6c7637a3-2a30-48b0-ba8a-974e6eff4e03", 00:11:29.312 "is_configured": true, 00:11:29.312 "data_offset": 2048, 00:11:29.312 "data_size": 63488 00:11:29.312 }, 00:11:29.312 { 00:11:29.313 "name": "BaseBdev2", 00:11:29.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.313 "is_configured": false, 00:11:29.313 "data_offset": 0, 00:11:29.313 "data_size": 0 00:11:29.313 }, 00:11:29.313 { 00:11:29.313 "name": "BaseBdev3", 00:11:29.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.313 "is_configured": false, 00:11:29.313 "data_offset": 0, 00:11:29.313 "data_size": 0 00:11:29.313 } 00:11:29.313 ] 00:11:29.313 }' 00:11:29.313 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.313 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:11:29.571 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:29.571 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.571 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.571 [2024-12-10 21:38:30.343701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:29.571 BaseBdev2 00:11:29.571 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.571 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:29.571 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:29.571 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:29.571 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:29.571 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:29.571 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:29.571 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:29.571 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.571 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.828 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.828 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:29.828 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:29.828 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.828 [ 00:11:29.828 { 00:11:29.828 "name": "BaseBdev2", 00:11:29.828 "aliases": [ 00:11:29.828 "f754a77a-7c81-4cf9-a263-bf7d84c60afe" 00:11:29.828 ], 00:11:29.828 "product_name": "Malloc disk", 00:11:29.828 "block_size": 512, 00:11:29.828 "num_blocks": 65536, 00:11:29.828 "uuid": "f754a77a-7c81-4cf9-a263-bf7d84c60afe", 00:11:29.828 "assigned_rate_limits": { 00:11:29.828 "rw_ios_per_sec": 0, 00:11:29.828 "rw_mbytes_per_sec": 0, 00:11:29.828 "r_mbytes_per_sec": 0, 00:11:29.828 "w_mbytes_per_sec": 0 00:11:29.828 }, 00:11:29.828 "claimed": true, 00:11:29.828 "claim_type": "exclusive_write", 00:11:29.828 "zoned": false, 00:11:29.828 "supported_io_types": { 00:11:29.828 "read": true, 00:11:29.828 "write": true, 00:11:29.828 "unmap": true, 00:11:29.828 "flush": true, 00:11:29.828 "reset": true, 00:11:29.829 "nvme_admin": false, 00:11:29.829 "nvme_io": false, 00:11:29.829 "nvme_io_md": false, 00:11:29.829 "write_zeroes": true, 00:11:29.829 "zcopy": true, 00:11:29.829 "get_zone_info": false, 00:11:29.829 "zone_management": false, 00:11:29.829 "zone_append": false, 00:11:29.829 "compare": false, 00:11:29.829 "compare_and_write": false, 00:11:29.829 "abort": true, 00:11:29.829 "seek_hole": false, 00:11:29.829 "seek_data": false, 00:11:29.829 "copy": true, 00:11:29.829 "nvme_iov_md": false 00:11:29.829 }, 00:11:29.829 "memory_domains": [ 00:11:29.829 { 00:11:29.829 "dma_device_id": "system", 00:11:29.829 "dma_device_type": 1 00:11:29.829 }, 00:11:29.829 { 00:11:29.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.829 "dma_device_type": 2 00:11:29.829 } 00:11:29.829 ], 00:11:29.829 "driver_specific": {} 00:11:29.829 } 00:11:29.829 ] 00:11:29.829 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.829 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:11:29.829 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:29.829 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:29.829 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:29.829 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.829 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.829 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.829 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.829 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:29.829 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.829 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.829 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.829 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.829 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.829 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.829 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.829 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.829 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.829 
21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.829 "name": "Existed_Raid", 00:11:29.829 "uuid": "2469fd62-06c5-44ee-9615-dd45c489eab8", 00:11:29.829 "strip_size_kb": 0, 00:11:29.829 "state": "configuring", 00:11:29.829 "raid_level": "raid1", 00:11:29.829 "superblock": true, 00:11:29.829 "num_base_bdevs": 3, 00:11:29.829 "num_base_bdevs_discovered": 2, 00:11:29.829 "num_base_bdevs_operational": 3, 00:11:29.829 "base_bdevs_list": [ 00:11:29.829 { 00:11:29.829 "name": "BaseBdev1", 00:11:29.829 "uuid": "6c7637a3-2a30-48b0-ba8a-974e6eff4e03", 00:11:29.829 "is_configured": true, 00:11:29.829 "data_offset": 2048, 00:11:29.829 "data_size": 63488 00:11:29.829 }, 00:11:29.829 { 00:11:29.829 "name": "BaseBdev2", 00:11:29.829 "uuid": "f754a77a-7c81-4cf9-a263-bf7d84c60afe", 00:11:29.829 "is_configured": true, 00:11:29.829 "data_offset": 2048, 00:11:29.829 "data_size": 63488 00:11:29.829 }, 00:11:29.829 { 00:11:29.829 "name": "BaseBdev3", 00:11:29.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.829 "is_configured": false, 00:11:29.829 "data_offset": 0, 00:11:29.829 "data_size": 0 00:11:29.829 } 00:11:29.829 ] 00:11:29.829 }' 00:11:29.829 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.829 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.085 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:30.085 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.085 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.085 [2024-12-10 21:38:30.833497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:30.085 [2024-12-10 21:38:30.833778] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:11:30.085 [2024-12-10 21:38:30.833800] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:30.085 [2024-12-10 21:38:30.834081] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:30.085 [2024-12-10 21:38:30.834235] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:30.085 [2024-12-10 21:38:30.834244] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:30.085 [2024-12-10 21:38:30.834403] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:30.085 BaseBdev3 00:11:30.085 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.085 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:30.086 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:30.086 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:30.086 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:30.086 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:30.086 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:30.086 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:30.086 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.086 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.086 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.086 21:38:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:30.086 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.086 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.086 [ 00:11:30.086 { 00:11:30.086 "name": "BaseBdev3", 00:11:30.086 "aliases": [ 00:11:30.086 "6ca99a91-1d14-4a42-a88a-bcc8c2b45ed7" 00:11:30.086 ], 00:11:30.086 "product_name": "Malloc disk", 00:11:30.086 "block_size": 512, 00:11:30.086 "num_blocks": 65536, 00:11:30.086 "uuid": "6ca99a91-1d14-4a42-a88a-bcc8c2b45ed7", 00:11:30.086 "assigned_rate_limits": { 00:11:30.086 "rw_ios_per_sec": 0, 00:11:30.086 "rw_mbytes_per_sec": 0, 00:11:30.086 "r_mbytes_per_sec": 0, 00:11:30.086 "w_mbytes_per_sec": 0 00:11:30.086 }, 00:11:30.086 "claimed": true, 00:11:30.086 "claim_type": "exclusive_write", 00:11:30.086 "zoned": false, 00:11:30.086 "supported_io_types": { 00:11:30.086 "read": true, 00:11:30.086 "write": true, 00:11:30.086 "unmap": true, 00:11:30.086 "flush": true, 00:11:30.086 "reset": true, 00:11:30.086 "nvme_admin": false, 00:11:30.086 "nvme_io": false, 00:11:30.086 "nvme_io_md": false, 00:11:30.086 "write_zeroes": true, 00:11:30.086 "zcopy": true, 00:11:30.086 "get_zone_info": false, 00:11:30.086 "zone_management": false, 00:11:30.086 "zone_append": false, 00:11:30.343 "compare": false, 00:11:30.343 "compare_and_write": false, 00:11:30.343 "abort": true, 00:11:30.343 "seek_hole": false, 00:11:30.343 "seek_data": false, 00:11:30.343 "copy": true, 00:11:30.343 "nvme_iov_md": false 00:11:30.343 }, 00:11:30.343 "memory_domains": [ 00:11:30.343 { 00:11:30.343 "dma_device_id": "system", 00:11:30.343 "dma_device_type": 1 00:11:30.343 }, 00:11:30.343 { 00:11:30.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.343 "dma_device_type": 2 00:11:30.343 } 00:11:30.343 ], 00:11:30.343 "driver_specific": {} 00:11:30.343 } 00:11:30.343 ] 
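The `bdev_get_bdevs -b BaseBdev3 -t 2000` result above is what the test's later jq filter `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")` runs against. A minimal Python sketch of that comparison, using an abridged copy of the descriptor captured in the trace (fields not needed for the check are omitted; jq's `join` renders missing/null fields as empty strings, which is why the script compares against the space-padded string `512   `):

```python
import json

# Abridged bdev descriptor as returned by bdev_get_bdevs in the trace above.
bdev_json = """
[
  {
    "name": "BaseBdev3",
    "product_name": "Malloc disk",
    "block_size": 512,
    "num_blocks": 65536,
    "uuid": "6ca99a91-1d14-4a42-a88a-bcc8c2b45ed7",
    "claimed": true,
    "claim_type": "exclusive_write"
  }
]
"""

bdevs = json.loads(bdev_json)

def describe(bdev):
    # Mirrors jq's '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")':
    # absent keys become empty strings, so a plain malloc bdev yields
    # "512" followed by three trailing spaces.
    fields = ["block_size", "md_size", "md_interleave", "dif_type"]
    return " ".join("" if bdev.get(f) is None else str(bdev[f]) for f in fields)

print(repr(describe(bdevs[0])))  # prints '512   ' (three trailing spaces)
```

This is why the trace's `[[ 512 == \5\1\2\ \ \ ]]` comparisons succeed for each base bdev: every malloc base bdev reports `block_size` 512 and no metadata/DIF fields.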
00:11:30.343 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.343 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:30.343 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:30.343 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:30.343 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:30.343 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.343 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:30.343 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:30.343 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.343 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:30.343 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.343 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.343 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.343 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.343 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.343 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.343 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.343 
21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.343 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.343 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.343 "name": "Existed_Raid", 00:11:30.343 "uuid": "2469fd62-06c5-44ee-9615-dd45c489eab8", 00:11:30.343 "strip_size_kb": 0, 00:11:30.343 "state": "online", 00:11:30.343 "raid_level": "raid1", 00:11:30.343 "superblock": true, 00:11:30.343 "num_base_bdevs": 3, 00:11:30.343 "num_base_bdevs_discovered": 3, 00:11:30.343 "num_base_bdevs_operational": 3, 00:11:30.343 "base_bdevs_list": [ 00:11:30.343 { 00:11:30.343 "name": "BaseBdev1", 00:11:30.343 "uuid": "6c7637a3-2a30-48b0-ba8a-974e6eff4e03", 00:11:30.343 "is_configured": true, 00:11:30.343 "data_offset": 2048, 00:11:30.343 "data_size": 63488 00:11:30.343 }, 00:11:30.343 { 00:11:30.343 "name": "BaseBdev2", 00:11:30.343 "uuid": "f754a77a-7c81-4cf9-a263-bf7d84c60afe", 00:11:30.343 "is_configured": true, 00:11:30.343 "data_offset": 2048, 00:11:30.343 "data_size": 63488 00:11:30.343 }, 00:11:30.343 { 00:11:30.343 "name": "BaseBdev3", 00:11:30.343 "uuid": "6ca99a91-1d14-4a42-a88a-bcc8c2b45ed7", 00:11:30.343 "is_configured": true, 00:11:30.343 "data_offset": 2048, 00:11:30.343 "data_size": 63488 00:11:30.343 } 00:11:30.343 ] 00:11:30.343 }' 00:11:30.343 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.343 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.600 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:30.600 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:30.600 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:11:30.600 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:30.600 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:30.600 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:30.600 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:30.600 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:30.600 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.600 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.600 [2024-12-10 21:38:31.357016] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:30.600 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.600 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:30.600 "name": "Existed_Raid", 00:11:30.600 "aliases": [ 00:11:30.600 "2469fd62-06c5-44ee-9615-dd45c489eab8" 00:11:30.600 ], 00:11:30.600 "product_name": "Raid Volume", 00:11:30.600 "block_size": 512, 00:11:30.600 "num_blocks": 63488, 00:11:30.600 "uuid": "2469fd62-06c5-44ee-9615-dd45c489eab8", 00:11:30.600 "assigned_rate_limits": { 00:11:30.600 "rw_ios_per_sec": 0, 00:11:30.600 "rw_mbytes_per_sec": 0, 00:11:30.600 "r_mbytes_per_sec": 0, 00:11:30.600 "w_mbytes_per_sec": 0 00:11:30.600 }, 00:11:30.600 "claimed": false, 00:11:30.600 "zoned": false, 00:11:30.600 "supported_io_types": { 00:11:30.600 "read": true, 00:11:30.600 "write": true, 00:11:30.600 "unmap": false, 00:11:30.600 "flush": false, 00:11:30.600 "reset": true, 00:11:30.600 "nvme_admin": false, 00:11:30.600 "nvme_io": false, 00:11:30.600 "nvme_io_md": false, 00:11:30.600 "write_zeroes": true, 
00:11:30.600 "zcopy": false, 00:11:30.600 "get_zone_info": false, 00:11:30.600 "zone_management": false, 00:11:30.600 "zone_append": false, 00:11:30.600 "compare": false, 00:11:30.600 "compare_and_write": false, 00:11:30.600 "abort": false, 00:11:30.600 "seek_hole": false, 00:11:30.600 "seek_data": false, 00:11:30.600 "copy": false, 00:11:30.600 "nvme_iov_md": false 00:11:30.600 }, 00:11:30.600 "memory_domains": [ 00:11:30.600 { 00:11:30.600 "dma_device_id": "system", 00:11:30.600 "dma_device_type": 1 00:11:30.600 }, 00:11:30.600 { 00:11:30.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.600 "dma_device_type": 2 00:11:30.600 }, 00:11:30.600 { 00:11:30.600 "dma_device_id": "system", 00:11:30.600 "dma_device_type": 1 00:11:30.600 }, 00:11:30.600 { 00:11:30.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.600 "dma_device_type": 2 00:11:30.600 }, 00:11:30.600 { 00:11:30.600 "dma_device_id": "system", 00:11:30.600 "dma_device_type": 1 00:11:30.600 }, 00:11:30.600 { 00:11:30.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.600 "dma_device_type": 2 00:11:30.600 } 00:11:30.600 ], 00:11:30.600 "driver_specific": { 00:11:30.600 "raid": { 00:11:30.600 "uuid": "2469fd62-06c5-44ee-9615-dd45c489eab8", 00:11:30.600 "strip_size_kb": 0, 00:11:30.600 "state": "online", 00:11:30.600 "raid_level": "raid1", 00:11:30.600 "superblock": true, 00:11:30.600 "num_base_bdevs": 3, 00:11:30.600 "num_base_bdevs_discovered": 3, 00:11:30.600 "num_base_bdevs_operational": 3, 00:11:30.600 "base_bdevs_list": [ 00:11:30.600 { 00:11:30.600 "name": "BaseBdev1", 00:11:30.600 "uuid": "6c7637a3-2a30-48b0-ba8a-974e6eff4e03", 00:11:30.600 "is_configured": true, 00:11:30.600 "data_offset": 2048, 00:11:30.600 "data_size": 63488 00:11:30.600 }, 00:11:30.600 { 00:11:30.600 "name": "BaseBdev2", 00:11:30.600 "uuid": "f754a77a-7c81-4cf9-a263-bf7d84c60afe", 00:11:30.600 "is_configured": true, 00:11:30.600 "data_offset": 2048, 00:11:30.600 "data_size": 63488 00:11:30.600 }, 00:11:30.600 { 
00:11:30.600 "name": "BaseBdev3", 00:11:30.600 "uuid": "6ca99a91-1d14-4a42-a88a-bcc8c2b45ed7", 00:11:30.600 "is_configured": true, 00:11:30.600 "data_offset": 2048, 00:11:30.600 "data_size": 63488 00:11:30.600 } 00:11:30.600 ] 00:11:30.600 } 00:11:30.600 } 00:11:30.600 }' 00:11:30.858 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:30.858 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:30.858 BaseBdev2 00:11:30.858 BaseBdev3' 00:11:30.858 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.858 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:30.858 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.858 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.858 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:30.858 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.858 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.858 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.858 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.858 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.858 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.858 21:38:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:30.858 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.858 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.858 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.858 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.858 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.858 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.858 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.858 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:30.858 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.858 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.858 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.858 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.858 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.858 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.858 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:30.858 21:38:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.858 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.858 [2024-12-10 21:38:31.604362] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:31.117 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.117 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:31.117 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:31.117 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:31.117 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:31.117 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:31.117 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:31.117 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.117 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:31.117 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.117 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.117 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:31.117 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.117 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.117 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.117 
21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.117 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.117 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.117 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.118 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.118 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.118 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.118 "name": "Existed_Raid", 00:11:31.118 "uuid": "2469fd62-06c5-44ee-9615-dd45c489eab8", 00:11:31.118 "strip_size_kb": 0, 00:11:31.118 "state": "online", 00:11:31.118 "raid_level": "raid1", 00:11:31.118 "superblock": true, 00:11:31.118 "num_base_bdevs": 3, 00:11:31.118 "num_base_bdevs_discovered": 2, 00:11:31.118 "num_base_bdevs_operational": 2, 00:11:31.118 "base_bdevs_list": [ 00:11:31.118 { 00:11:31.118 "name": null, 00:11:31.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.118 "is_configured": false, 00:11:31.118 "data_offset": 0, 00:11:31.118 "data_size": 63488 00:11:31.118 }, 00:11:31.118 { 00:11:31.118 "name": "BaseBdev2", 00:11:31.118 "uuid": "f754a77a-7c81-4cf9-a263-bf7d84c60afe", 00:11:31.118 "is_configured": true, 00:11:31.118 "data_offset": 2048, 00:11:31.118 "data_size": 63488 00:11:31.118 }, 00:11:31.118 { 00:11:31.118 "name": "BaseBdev3", 00:11:31.118 "uuid": "6ca99a91-1d14-4a42-a88a-bcc8c2b45ed7", 00:11:31.118 "is_configured": true, 00:11:31.118 "data_offset": 2048, 00:11:31.118 "data_size": 63488 00:11:31.118 } 00:11:31.118 ] 00:11:31.118 }' 00:11:31.118 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.118 
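After `bdev_malloc_delete BaseBdev1`, the trace verifies that the raid1 array stays `online` with 2 of 3 base bdevs configured. A rough Python sketch of the checks `verify_raid_bdev_state` performs via jq, applied to an abridged copy of the `bdev_raid_get_bdevs all` output captured above (the helper name and exact assertion set here are a simplification of the shell function, not its literal implementation):

```python
import json

# Abridged raid_bdev_info from the trace, after BaseBdev1 was deleted.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2,
  "base_bdevs_list": [
    {"name": null, "is_configured": false},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": "BaseBdev3", "is_configured": true}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, num_operational):
    # Approximates the shell function's jq-based assertions.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["num_base_bdevs_operational"] == num_operational
    configured = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert configured == info["num_base_bdevs_discovered"]

# raid1 has redundancy, so losing one of three base bdevs leaves it online;
# the removed slot keeps the all-zero uuid placeholder seen in the trace.
verify_raid_bdev_state(raid_bdev_info, "online", "raid1", 2)
print("state check passed")
```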
21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.447 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:31.447 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:31.448 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.448 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:31.448 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.448 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.448 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.448 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:31.448 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:31.448 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:31.448 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.448 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.448 [2024-12-10 21:38:32.168838] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:31.706 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.706 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:31.706 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:31.706 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:31.706 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.706 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.706 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:31.706 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.706 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:31.706 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:31.706 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:31.706 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.706 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.706 [2024-12-10 21:38:32.332328] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:31.706 [2024-12-10 21:38:32.332521] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:31.706 [2024-12-10 21:38:32.447054] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:31.706 [2024-12-10 21:38:32.447120] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:31.706 [2024-12-10 21:38:32.447133] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:31.706 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.706 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:31.706 21:38:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:31.706 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.706 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.706 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.706 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:31.706 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.965 BaseBdev2 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.965 [ 00:11:31.965 { 00:11:31.965 "name": "BaseBdev2", 00:11:31.965 "aliases": [ 00:11:31.965 "70e29a04-9b6f-4c0c-8f4b-cf9025a00b3a" 00:11:31.965 ], 00:11:31.965 "product_name": "Malloc disk", 00:11:31.965 "block_size": 512, 00:11:31.965 "num_blocks": 65536, 00:11:31.965 "uuid": "70e29a04-9b6f-4c0c-8f4b-cf9025a00b3a", 00:11:31.965 "assigned_rate_limits": { 00:11:31.965 "rw_ios_per_sec": 0, 00:11:31.965 "rw_mbytes_per_sec": 0, 00:11:31.965 "r_mbytes_per_sec": 0, 00:11:31.965 "w_mbytes_per_sec": 0 00:11:31.965 }, 00:11:31.965 "claimed": false, 00:11:31.965 "zoned": false, 00:11:31.965 "supported_io_types": { 00:11:31.965 "read": true, 00:11:31.965 "write": true, 00:11:31.965 "unmap": true, 00:11:31.965 "flush": true, 00:11:31.965 "reset": true, 00:11:31.965 "nvme_admin": false, 00:11:31.965 "nvme_io": false, 00:11:31.965 
"nvme_io_md": false, 00:11:31.965 "write_zeroes": true, 00:11:31.965 "zcopy": true, 00:11:31.965 "get_zone_info": false, 00:11:31.965 "zone_management": false, 00:11:31.965 "zone_append": false, 00:11:31.965 "compare": false, 00:11:31.965 "compare_and_write": false, 00:11:31.965 "abort": true, 00:11:31.965 "seek_hole": false, 00:11:31.965 "seek_data": false, 00:11:31.965 "copy": true, 00:11:31.965 "nvme_iov_md": false 00:11:31.965 }, 00:11:31.965 "memory_domains": [ 00:11:31.965 { 00:11:31.965 "dma_device_id": "system", 00:11:31.965 "dma_device_type": 1 00:11:31.965 }, 00:11:31.965 { 00:11:31.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.965 "dma_device_type": 2 00:11:31.965 } 00:11:31.965 ], 00:11:31.965 "driver_specific": {} 00:11:31.965 } 00:11:31.965 ] 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.965 BaseBdev3 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.965 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:31.966 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.966 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.966 [ 00:11:31.966 { 00:11:31.966 "name": "BaseBdev3", 00:11:31.966 "aliases": [ 00:11:31.966 "f47e5cc1-5b38-4059-aa51-006034961f60" 00:11:31.966 ], 00:11:31.966 "product_name": "Malloc disk", 00:11:31.966 "block_size": 512, 00:11:31.966 "num_blocks": 65536, 00:11:31.966 "uuid": "f47e5cc1-5b38-4059-aa51-006034961f60", 00:11:31.966 "assigned_rate_limits": { 00:11:31.966 "rw_ios_per_sec": 0, 00:11:31.966 "rw_mbytes_per_sec": 0, 00:11:31.966 "r_mbytes_per_sec": 0, 00:11:31.966 "w_mbytes_per_sec": 0 00:11:31.966 }, 00:11:31.966 "claimed": false, 00:11:31.966 "zoned": false, 00:11:31.966 "supported_io_types": { 00:11:31.966 "read": true, 00:11:31.966 "write": true, 00:11:31.966 "unmap": true, 00:11:31.966 "flush": true, 00:11:31.966 "reset": true, 00:11:31.966 "nvme_admin": false, 
00:11:31.966 "nvme_io": false, 00:11:31.966 "nvme_io_md": false, 00:11:31.966 "write_zeroes": true, 00:11:31.966 "zcopy": true, 00:11:31.966 "get_zone_info": false, 00:11:31.966 "zone_management": false, 00:11:31.966 "zone_append": false, 00:11:31.966 "compare": false, 00:11:31.966 "compare_and_write": false, 00:11:31.966 "abort": true, 00:11:31.966 "seek_hole": false, 00:11:31.966 "seek_data": false, 00:11:31.966 "copy": true, 00:11:31.966 "nvme_iov_md": false 00:11:31.966 }, 00:11:31.966 "memory_domains": [ 00:11:31.966 { 00:11:31.966 "dma_device_id": "system", 00:11:31.966 "dma_device_type": 1 00:11:31.966 }, 00:11:31.966 { 00:11:31.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.966 "dma_device_type": 2 00:11:31.966 } 00:11:31.966 ], 00:11:31.966 "driver_specific": {} 00:11:31.966 } 00:11:31.966 ] 00:11:31.966 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.966 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:31.966 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:31.966 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:31.966 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:31.966 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.966 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.966 [2024-12-10 21:38:32.652992] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:31.966 [2024-12-10 21:38:32.653092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:31.966 [2024-12-10 21:38:32.653139] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:31.966 [2024-12-10 21:38:32.654986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:31.966 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.966 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:31.966 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.966 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.966 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.966 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.966 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:31.966 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.966 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.966 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.966 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.966 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.966 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.966 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.966 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.966 
21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.966 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.966 "name": "Existed_Raid", 00:11:31.966 "uuid": "8eebf933-3448-4fb9-ae88-66fd8d2ba1a5", 00:11:31.966 "strip_size_kb": 0, 00:11:31.966 "state": "configuring", 00:11:31.966 "raid_level": "raid1", 00:11:31.966 "superblock": true, 00:11:31.966 "num_base_bdevs": 3, 00:11:31.966 "num_base_bdevs_discovered": 2, 00:11:31.966 "num_base_bdevs_operational": 3, 00:11:31.966 "base_bdevs_list": [ 00:11:31.966 { 00:11:31.966 "name": "BaseBdev1", 00:11:31.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.966 "is_configured": false, 00:11:31.966 "data_offset": 0, 00:11:31.966 "data_size": 0 00:11:31.966 }, 00:11:31.966 { 00:11:31.966 "name": "BaseBdev2", 00:11:31.966 "uuid": "70e29a04-9b6f-4c0c-8f4b-cf9025a00b3a", 00:11:31.966 "is_configured": true, 00:11:31.966 "data_offset": 2048, 00:11:31.966 "data_size": 63488 00:11:31.966 }, 00:11:31.966 { 00:11:31.966 "name": "BaseBdev3", 00:11:31.966 "uuid": "f47e5cc1-5b38-4059-aa51-006034961f60", 00:11:31.966 "is_configured": true, 00:11:31.966 "data_offset": 2048, 00:11:31.966 "data_size": 63488 00:11:31.966 } 00:11:31.966 ] 00:11:31.966 }' 00:11:31.966 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.966 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.531 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:32.531 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.531 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.531 [2024-12-10 21:38:33.104267] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:32.531 21:38:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.531 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:32.531 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.531 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.531 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.531 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.531 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:32.531 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.531 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.531 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.531 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.531 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.531 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.531 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.531 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.531 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.531 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.531 "name": 
"Existed_Raid", 00:11:32.531 "uuid": "8eebf933-3448-4fb9-ae88-66fd8d2ba1a5", 00:11:32.531 "strip_size_kb": 0, 00:11:32.531 "state": "configuring", 00:11:32.531 "raid_level": "raid1", 00:11:32.531 "superblock": true, 00:11:32.532 "num_base_bdevs": 3, 00:11:32.532 "num_base_bdevs_discovered": 1, 00:11:32.532 "num_base_bdevs_operational": 3, 00:11:32.532 "base_bdevs_list": [ 00:11:32.532 { 00:11:32.532 "name": "BaseBdev1", 00:11:32.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.532 "is_configured": false, 00:11:32.532 "data_offset": 0, 00:11:32.532 "data_size": 0 00:11:32.532 }, 00:11:32.532 { 00:11:32.532 "name": null, 00:11:32.532 "uuid": "70e29a04-9b6f-4c0c-8f4b-cf9025a00b3a", 00:11:32.532 "is_configured": false, 00:11:32.532 "data_offset": 0, 00:11:32.532 "data_size": 63488 00:11:32.532 }, 00:11:32.532 { 00:11:32.532 "name": "BaseBdev3", 00:11:32.532 "uuid": "f47e5cc1-5b38-4059-aa51-006034961f60", 00:11:32.532 "is_configured": true, 00:11:32.532 "data_offset": 2048, 00:11:32.532 "data_size": 63488 00:11:32.532 } 00:11:32.532 ] 00:11:32.532 }' 00:11:32.532 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.532 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.790 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.790 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.790 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.790 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:32.790 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.048 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:33.048 
21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:33.048 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.048 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.048 [2024-12-10 21:38:33.641846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:33.048 BaseBdev1 00:11:33.048 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.048 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:33.048 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:33.048 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:33.048 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:33.048 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:33.048 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:33.048 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:33.048 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.048 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.048 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.048 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:33.048 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:33.048 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.048 [ 00:11:33.048 { 00:11:33.048 "name": "BaseBdev1", 00:11:33.048 "aliases": [ 00:11:33.048 "3c195c33-7dfa-43e5-b107-591d383b3d94" 00:11:33.049 ], 00:11:33.049 "product_name": "Malloc disk", 00:11:33.049 "block_size": 512, 00:11:33.049 "num_blocks": 65536, 00:11:33.049 "uuid": "3c195c33-7dfa-43e5-b107-591d383b3d94", 00:11:33.049 "assigned_rate_limits": { 00:11:33.049 "rw_ios_per_sec": 0, 00:11:33.049 "rw_mbytes_per_sec": 0, 00:11:33.049 "r_mbytes_per_sec": 0, 00:11:33.049 "w_mbytes_per_sec": 0 00:11:33.049 }, 00:11:33.049 "claimed": true, 00:11:33.049 "claim_type": "exclusive_write", 00:11:33.049 "zoned": false, 00:11:33.049 "supported_io_types": { 00:11:33.049 "read": true, 00:11:33.049 "write": true, 00:11:33.049 "unmap": true, 00:11:33.049 "flush": true, 00:11:33.049 "reset": true, 00:11:33.049 "nvme_admin": false, 00:11:33.049 "nvme_io": false, 00:11:33.049 "nvme_io_md": false, 00:11:33.049 "write_zeroes": true, 00:11:33.049 "zcopy": true, 00:11:33.049 "get_zone_info": false, 00:11:33.049 "zone_management": false, 00:11:33.049 "zone_append": false, 00:11:33.049 "compare": false, 00:11:33.049 "compare_and_write": false, 00:11:33.049 "abort": true, 00:11:33.049 "seek_hole": false, 00:11:33.049 "seek_data": false, 00:11:33.049 "copy": true, 00:11:33.049 "nvme_iov_md": false 00:11:33.049 }, 00:11:33.049 "memory_domains": [ 00:11:33.049 { 00:11:33.049 "dma_device_id": "system", 00:11:33.049 "dma_device_type": 1 00:11:33.049 }, 00:11:33.049 { 00:11:33.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.049 "dma_device_type": 2 00:11:33.049 } 00:11:33.049 ], 00:11:33.049 "driver_specific": {} 00:11:33.049 } 00:11:33.049 ] 00:11:33.049 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.049 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:33.049 
21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:33.049 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.049 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.049 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.049 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.049 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:33.049 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.049 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.049 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.049 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.049 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.049 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.049 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.049 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.049 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.049 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.049 "name": "Existed_Raid", 00:11:33.049 "uuid": "8eebf933-3448-4fb9-ae88-66fd8d2ba1a5", 00:11:33.049 "strip_size_kb": 0, 
00:11:33.049 "state": "configuring", 00:11:33.049 "raid_level": "raid1", 00:11:33.049 "superblock": true, 00:11:33.049 "num_base_bdevs": 3, 00:11:33.049 "num_base_bdevs_discovered": 2, 00:11:33.049 "num_base_bdevs_operational": 3, 00:11:33.049 "base_bdevs_list": [ 00:11:33.049 { 00:11:33.049 "name": "BaseBdev1", 00:11:33.049 "uuid": "3c195c33-7dfa-43e5-b107-591d383b3d94", 00:11:33.049 "is_configured": true, 00:11:33.049 "data_offset": 2048, 00:11:33.049 "data_size": 63488 00:11:33.049 }, 00:11:33.049 { 00:11:33.049 "name": null, 00:11:33.049 "uuid": "70e29a04-9b6f-4c0c-8f4b-cf9025a00b3a", 00:11:33.049 "is_configured": false, 00:11:33.049 "data_offset": 0, 00:11:33.049 "data_size": 63488 00:11:33.049 }, 00:11:33.049 { 00:11:33.049 "name": "BaseBdev3", 00:11:33.049 "uuid": "f47e5cc1-5b38-4059-aa51-006034961f60", 00:11:33.049 "is_configured": true, 00:11:33.049 "data_offset": 2048, 00:11:33.049 "data_size": 63488 00:11:33.049 } 00:11:33.049 ] 00:11:33.049 }' 00:11:33.049 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.049 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.616 21:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.616 21:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:33.616 21:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.616 21:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.616 21:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.616 21:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:33.616 21:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:11:33.616 21:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.616 21:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.616 [2024-12-10 21:38:34.141055] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:33.616 21:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.616 21:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:33.616 21:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.616 21:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.616 21:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.616 21:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.616 21:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:33.616 21:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.616 21:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.616 21:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.616 21:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.616 21:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.616 21:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.616 21:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:33.616 21:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.616 21:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.616 21:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.616 "name": "Existed_Raid", 00:11:33.616 "uuid": "8eebf933-3448-4fb9-ae88-66fd8d2ba1a5", 00:11:33.616 "strip_size_kb": 0, 00:11:33.616 "state": "configuring", 00:11:33.616 "raid_level": "raid1", 00:11:33.616 "superblock": true, 00:11:33.616 "num_base_bdevs": 3, 00:11:33.616 "num_base_bdevs_discovered": 1, 00:11:33.616 "num_base_bdevs_operational": 3, 00:11:33.616 "base_bdevs_list": [ 00:11:33.616 { 00:11:33.616 "name": "BaseBdev1", 00:11:33.616 "uuid": "3c195c33-7dfa-43e5-b107-591d383b3d94", 00:11:33.616 "is_configured": true, 00:11:33.616 "data_offset": 2048, 00:11:33.616 "data_size": 63488 00:11:33.616 }, 00:11:33.616 { 00:11:33.616 "name": null, 00:11:33.616 "uuid": "70e29a04-9b6f-4c0c-8f4b-cf9025a00b3a", 00:11:33.616 "is_configured": false, 00:11:33.616 "data_offset": 0, 00:11:33.616 "data_size": 63488 00:11:33.616 }, 00:11:33.616 { 00:11:33.616 "name": null, 00:11:33.616 "uuid": "f47e5cc1-5b38-4059-aa51-006034961f60", 00:11:33.616 "is_configured": false, 00:11:33.616 "data_offset": 0, 00:11:33.616 "data_size": 63488 00:11:33.616 } 00:11:33.616 ] 00:11:33.616 }' 00:11:33.616 21:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.616 21:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.874 21:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:33.874 21:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.874 21:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:33.874 21:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.874 21:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.874 21:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:33.874 21:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:33.874 21:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.874 21:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.874 [2024-12-10 21:38:34.652258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:34.133 21:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.133 21:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:34.133 21:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.133 21:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.133 21:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.133 21:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.133 21:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:34.133 21:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.133 21:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.133 21:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:11:34.133 21:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.133 21:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.133 21:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.133 21:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.133 21:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.133 21:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.133 21:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.133 "name": "Existed_Raid", 00:11:34.133 "uuid": "8eebf933-3448-4fb9-ae88-66fd8d2ba1a5", 00:11:34.133 "strip_size_kb": 0, 00:11:34.133 "state": "configuring", 00:11:34.133 "raid_level": "raid1", 00:11:34.133 "superblock": true, 00:11:34.133 "num_base_bdevs": 3, 00:11:34.133 "num_base_bdevs_discovered": 2, 00:11:34.133 "num_base_bdevs_operational": 3, 00:11:34.133 "base_bdevs_list": [ 00:11:34.133 { 00:11:34.133 "name": "BaseBdev1", 00:11:34.133 "uuid": "3c195c33-7dfa-43e5-b107-591d383b3d94", 00:11:34.133 "is_configured": true, 00:11:34.133 "data_offset": 2048, 00:11:34.133 "data_size": 63488 00:11:34.133 }, 00:11:34.133 { 00:11:34.133 "name": null, 00:11:34.133 "uuid": "70e29a04-9b6f-4c0c-8f4b-cf9025a00b3a", 00:11:34.133 "is_configured": false, 00:11:34.133 "data_offset": 0, 00:11:34.133 "data_size": 63488 00:11:34.133 }, 00:11:34.133 { 00:11:34.133 "name": "BaseBdev3", 00:11:34.133 "uuid": "f47e5cc1-5b38-4059-aa51-006034961f60", 00:11:34.133 "is_configured": true, 00:11:34.133 "data_offset": 2048, 00:11:34.133 "data_size": 63488 00:11:34.133 } 00:11:34.133 ] 00:11:34.133 }' 00:11:34.133 21:38:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.133 21:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.392 21:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:34.392 21:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.392 21:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.392 21:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.392 21:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.392 21:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:34.392 21:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:34.392 21:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.392 21:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.392 [2024-12-10 21:38:35.147468] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:34.653 21:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.653 21:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:34.653 21:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.653 21:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.653 21:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.653 21:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:11:34.653 21:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:34.653 21:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.653 21:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.653 21:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.653 21:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.653 21:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.653 21:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.653 21:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.653 21:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.653 21:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.653 21:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.653 "name": "Existed_Raid", 00:11:34.653 "uuid": "8eebf933-3448-4fb9-ae88-66fd8d2ba1a5", 00:11:34.653 "strip_size_kb": 0, 00:11:34.653 "state": "configuring", 00:11:34.653 "raid_level": "raid1", 00:11:34.653 "superblock": true, 00:11:34.653 "num_base_bdevs": 3, 00:11:34.653 "num_base_bdevs_discovered": 1, 00:11:34.653 "num_base_bdevs_operational": 3, 00:11:34.653 "base_bdevs_list": [ 00:11:34.653 { 00:11:34.653 "name": null, 00:11:34.653 "uuid": "3c195c33-7dfa-43e5-b107-591d383b3d94", 00:11:34.653 "is_configured": false, 00:11:34.653 "data_offset": 0, 00:11:34.653 "data_size": 63488 00:11:34.653 }, 00:11:34.653 { 00:11:34.653 "name": null, 00:11:34.653 "uuid": 
"70e29a04-9b6f-4c0c-8f4b-cf9025a00b3a", 00:11:34.653 "is_configured": false, 00:11:34.653 "data_offset": 0, 00:11:34.653 "data_size": 63488 00:11:34.653 }, 00:11:34.653 { 00:11:34.653 "name": "BaseBdev3", 00:11:34.653 "uuid": "f47e5cc1-5b38-4059-aa51-006034961f60", 00:11:34.653 "is_configured": true, 00:11:34.653 "data_offset": 2048, 00:11:34.653 "data_size": 63488 00:11:34.653 } 00:11:34.653 ] 00:11:34.653 }' 00:11:34.653 21:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.653 21:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.220 21:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.220 21:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:35.220 21:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.220 21:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.220 21:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.220 21:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:35.220 21:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:35.220 21:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.220 21:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.220 [2024-12-10 21:38:35.769089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:35.220 21:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.220 21:38:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:35.220 21:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.220 21:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.220 21:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.220 21:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.220 21:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:35.220 21:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.220 21:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.220 21:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.220 21:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.220 21:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.220 21:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.220 21:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.220 21:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.220 21:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.220 21:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.220 "name": "Existed_Raid", 00:11:35.220 "uuid": "8eebf933-3448-4fb9-ae88-66fd8d2ba1a5", 00:11:35.220 "strip_size_kb": 0, 00:11:35.220 "state": "configuring", 00:11:35.220 
"raid_level": "raid1", 00:11:35.220 "superblock": true, 00:11:35.220 "num_base_bdevs": 3, 00:11:35.220 "num_base_bdevs_discovered": 2, 00:11:35.220 "num_base_bdevs_operational": 3, 00:11:35.220 "base_bdevs_list": [ 00:11:35.220 { 00:11:35.220 "name": null, 00:11:35.220 "uuid": "3c195c33-7dfa-43e5-b107-591d383b3d94", 00:11:35.220 "is_configured": false, 00:11:35.220 "data_offset": 0, 00:11:35.220 "data_size": 63488 00:11:35.220 }, 00:11:35.220 { 00:11:35.220 "name": "BaseBdev2", 00:11:35.220 "uuid": "70e29a04-9b6f-4c0c-8f4b-cf9025a00b3a", 00:11:35.220 "is_configured": true, 00:11:35.220 "data_offset": 2048, 00:11:35.220 "data_size": 63488 00:11:35.220 }, 00:11:35.220 { 00:11:35.220 "name": "BaseBdev3", 00:11:35.220 "uuid": "f47e5cc1-5b38-4059-aa51-006034961f60", 00:11:35.220 "is_configured": true, 00:11:35.220 "data_offset": 2048, 00:11:35.220 "data_size": 63488 00:11:35.220 } 00:11:35.220 ] 00:11:35.220 }' 00:11:35.220 21:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.220 21:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.480 21:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.480 21:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:35.480 21:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.480 21:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.480 21:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.738 21:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:35.738 21:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.738 21:38:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.738 21:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.738 21:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:35.738 21:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.738 21:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3c195c33-7dfa-43e5-b107-591d383b3d94 00:11:35.738 21:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.738 21:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.738 [2024-12-10 21:38:36.366895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:35.738 [2024-12-10 21:38:36.367175] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:35.738 [2024-12-10 21:38:36.367190] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:35.738 [2024-12-10 21:38:36.367530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:35.738 [2024-12-10 21:38:36.367729] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:35.738 [2024-12-10 21:38:36.367744] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:35.738 NewBaseBdev 00:11:35.738 [2024-12-10 21:38:36.367902] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.738 21:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.738 21:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:35.738 
21:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:35.738 21:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:35.738 21:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:35.738 21:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:35.738 21:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:35.738 21:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:35.738 21:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.738 21:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.738 21:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.738 21:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:35.738 21:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.738 21:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.738 [ 00:11:35.738 { 00:11:35.738 "name": "NewBaseBdev", 00:11:35.738 "aliases": [ 00:11:35.738 "3c195c33-7dfa-43e5-b107-591d383b3d94" 00:11:35.738 ], 00:11:35.738 "product_name": "Malloc disk", 00:11:35.738 "block_size": 512, 00:11:35.738 "num_blocks": 65536, 00:11:35.738 "uuid": "3c195c33-7dfa-43e5-b107-591d383b3d94", 00:11:35.738 "assigned_rate_limits": { 00:11:35.738 "rw_ios_per_sec": 0, 00:11:35.738 "rw_mbytes_per_sec": 0, 00:11:35.738 "r_mbytes_per_sec": 0, 00:11:35.738 "w_mbytes_per_sec": 0 00:11:35.739 }, 00:11:35.739 "claimed": true, 00:11:35.739 "claim_type": "exclusive_write", 00:11:35.739 
"zoned": false, 00:11:35.739 "supported_io_types": { 00:11:35.739 "read": true, 00:11:35.739 "write": true, 00:11:35.739 "unmap": true, 00:11:35.739 "flush": true, 00:11:35.739 "reset": true, 00:11:35.739 "nvme_admin": false, 00:11:35.739 "nvme_io": false, 00:11:35.739 "nvme_io_md": false, 00:11:35.739 "write_zeroes": true, 00:11:35.739 "zcopy": true, 00:11:35.739 "get_zone_info": false, 00:11:35.739 "zone_management": false, 00:11:35.739 "zone_append": false, 00:11:35.739 "compare": false, 00:11:35.739 "compare_and_write": false, 00:11:35.739 "abort": true, 00:11:35.739 "seek_hole": false, 00:11:35.739 "seek_data": false, 00:11:35.739 "copy": true, 00:11:35.739 "nvme_iov_md": false 00:11:35.739 }, 00:11:35.739 "memory_domains": [ 00:11:35.739 { 00:11:35.739 "dma_device_id": "system", 00:11:35.739 "dma_device_type": 1 00:11:35.739 }, 00:11:35.739 { 00:11:35.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.739 "dma_device_type": 2 00:11:35.739 } 00:11:35.739 ], 00:11:35.739 "driver_specific": {} 00:11:35.739 } 00:11:35.739 ] 00:11:35.739 21:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.739 21:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:35.739 21:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:35.739 21:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.739 21:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.739 21:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.739 21:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.739 21:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:11:35.739 21:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.739 21:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.739 21:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.739 21:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.739 21:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.739 21:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.739 21:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.739 21:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.739 21:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.739 21:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.739 "name": "Existed_Raid", 00:11:35.739 "uuid": "8eebf933-3448-4fb9-ae88-66fd8d2ba1a5", 00:11:35.739 "strip_size_kb": 0, 00:11:35.739 "state": "online", 00:11:35.739 "raid_level": "raid1", 00:11:35.739 "superblock": true, 00:11:35.739 "num_base_bdevs": 3, 00:11:35.739 "num_base_bdevs_discovered": 3, 00:11:35.739 "num_base_bdevs_operational": 3, 00:11:35.739 "base_bdevs_list": [ 00:11:35.739 { 00:11:35.739 "name": "NewBaseBdev", 00:11:35.739 "uuid": "3c195c33-7dfa-43e5-b107-591d383b3d94", 00:11:35.739 "is_configured": true, 00:11:35.739 "data_offset": 2048, 00:11:35.739 "data_size": 63488 00:11:35.739 }, 00:11:35.739 { 00:11:35.739 "name": "BaseBdev2", 00:11:35.739 "uuid": "70e29a04-9b6f-4c0c-8f4b-cf9025a00b3a", 00:11:35.739 "is_configured": true, 00:11:35.739 "data_offset": 2048, 00:11:35.739 "data_size": 63488 00:11:35.739 }, 00:11:35.739 
{ 00:11:35.739 "name": "BaseBdev3", 00:11:35.739 "uuid": "f47e5cc1-5b38-4059-aa51-006034961f60", 00:11:35.739 "is_configured": true, 00:11:35.739 "data_offset": 2048, 00:11:35.739 "data_size": 63488 00:11:35.739 } 00:11:35.739 ] 00:11:35.739 }' 00:11:35.739 21:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.739 21:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.305 21:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:36.305 21:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:36.305 21:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:36.305 21:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:36.305 21:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:36.305 21:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:36.305 21:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:36.305 21:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:36.305 21:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.305 21:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.305 [2024-12-10 21:38:36.874476] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.305 21:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.305 21:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:36.305 "name": "Existed_Raid", 00:11:36.305 
"aliases": [ 00:11:36.305 "8eebf933-3448-4fb9-ae88-66fd8d2ba1a5" 00:11:36.305 ], 00:11:36.305 "product_name": "Raid Volume", 00:11:36.305 "block_size": 512, 00:11:36.305 "num_blocks": 63488, 00:11:36.305 "uuid": "8eebf933-3448-4fb9-ae88-66fd8d2ba1a5", 00:11:36.305 "assigned_rate_limits": { 00:11:36.305 "rw_ios_per_sec": 0, 00:11:36.305 "rw_mbytes_per_sec": 0, 00:11:36.305 "r_mbytes_per_sec": 0, 00:11:36.305 "w_mbytes_per_sec": 0 00:11:36.305 }, 00:11:36.305 "claimed": false, 00:11:36.305 "zoned": false, 00:11:36.305 "supported_io_types": { 00:11:36.305 "read": true, 00:11:36.305 "write": true, 00:11:36.305 "unmap": false, 00:11:36.305 "flush": false, 00:11:36.305 "reset": true, 00:11:36.305 "nvme_admin": false, 00:11:36.305 "nvme_io": false, 00:11:36.305 "nvme_io_md": false, 00:11:36.305 "write_zeroes": true, 00:11:36.305 "zcopy": false, 00:11:36.305 "get_zone_info": false, 00:11:36.305 "zone_management": false, 00:11:36.305 "zone_append": false, 00:11:36.305 "compare": false, 00:11:36.305 "compare_and_write": false, 00:11:36.305 "abort": false, 00:11:36.305 "seek_hole": false, 00:11:36.305 "seek_data": false, 00:11:36.305 "copy": false, 00:11:36.305 "nvme_iov_md": false 00:11:36.305 }, 00:11:36.305 "memory_domains": [ 00:11:36.305 { 00:11:36.305 "dma_device_id": "system", 00:11:36.305 "dma_device_type": 1 00:11:36.305 }, 00:11:36.305 { 00:11:36.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.305 "dma_device_type": 2 00:11:36.305 }, 00:11:36.305 { 00:11:36.305 "dma_device_id": "system", 00:11:36.305 "dma_device_type": 1 00:11:36.305 }, 00:11:36.305 { 00:11:36.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.305 "dma_device_type": 2 00:11:36.305 }, 00:11:36.305 { 00:11:36.305 "dma_device_id": "system", 00:11:36.305 "dma_device_type": 1 00:11:36.305 }, 00:11:36.305 { 00:11:36.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.305 "dma_device_type": 2 00:11:36.305 } 00:11:36.305 ], 00:11:36.305 "driver_specific": { 00:11:36.305 "raid": { 00:11:36.305 
"uuid": "8eebf933-3448-4fb9-ae88-66fd8d2ba1a5", 00:11:36.305 "strip_size_kb": 0, 00:11:36.305 "state": "online", 00:11:36.305 "raid_level": "raid1", 00:11:36.305 "superblock": true, 00:11:36.305 "num_base_bdevs": 3, 00:11:36.305 "num_base_bdevs_discovered": 3, 00:11:36.305 "num_base_bdevs_operational": 3, 00:11:36.305 "base_bdevs_list": [ 00:11:36.305 { 00:11:36.305 "name": "NewBaseBdev", 00:11:36.305 "uuid": "3c195c33-7dfa-43e5-b107-591d383b3d94", 00:11:36.305 "is_configured": true, 00:11:36.305 "data_offset": 2048, 00:11:36.305 "data_size": 63488 00:11:36.305 }, 00:11:36.305 { 00:11:36.305 "name": "BaseBdev2", 00:11:36.305 "uuid": "70e29a04-9b6f-4c0c-8f4b-cf9025a00b3a", 00:11:36.305 "is_configured": true, 00:11:36.305 "data_offset": 2048, 00:11:36.305 "data_size": 63488 00:11:36.305 }, 00:11:36.305 { 00:11:36.305 "name": "BaseBdev3", 00:11:36.305 "uuid": "f47e5cc1-5b38-4059-aa51-006034961f60", 00:11:36.305 "is_configured": true, 00:11:36.305 "data_offset": 2048, 00:11:36.305 "data_size": 63488 00:11:36.305 } 00:11:36.305 ] 00:11:36.305 } 00:11:36.305 } 00:11:36.305 }' 00:11:36.305 21:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:36.305 21:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:36.305 BaseBdev2 00:11:36.305 BaseBdev3' 00:11:36.305 21:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.305 21:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:36.305 21:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.305 21:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:36.305 21:38:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.305 21:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.305 21:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.305 21:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.305 21:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.305 21:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.305 21:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.305 21:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:36.305 21:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.305 21:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.305 21:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.305 21:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.564 21:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.564 21:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.564 21:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.564 21:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.564 21:38:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:36.564 21:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.564 21:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.564 21:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.564 21:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.564 21:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.564 21:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:36.564 21:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.564 21:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.564 [2024-12-10 21:38:37.157650] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:36.564 [2024-12-10 21:38:37.157742] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:36.564 [2024-12-10 21:38:37.157852] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:36.564 [2024-12-10 21:38:37.158171] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:36.564 [2024-12-10 21:38:37.158231] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:36.564 21:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.564 21:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68140 00:11:36.564 21:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 68140 ']' 00:11:36.564 21:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68140 00:11:36.564 21:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:36.564 21:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:36.564 21:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68140 00:11:36.564 killing process with pid 68140 00:11:36.564 21:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:36.564 21:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:36.564 21:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68140' 00:11:36.564 21:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68140 00:11:36.564 [2024-12-10 21:38:37.202944] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:36.564 21:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68140 00:11:36.821 [2024-12-10 21:38:37.526161] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:38.197 ************************************ 00:11:38.197 END TEST raid_state_function_test_sb 00:11:38.197 ************************************ 00:11:38.197 21:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:38.197 00:11:38.197 real 0m10.785s 00:11:38.197 user 0m17.125s 00:11:38.197 sys 0m1.847s 00:11:38.197 21:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.197 21:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.197 21:38:38 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:11:38.197 21:38:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:38.197 21:38:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.197 21:38:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:38.197 ************************************ 00:11:38.197 START TEST raid_superblock_test 00:11:38.197 ************************************ 00:11:38.197 21:38:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:11:38.197 21:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:38.197 21:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:11:38.197 21:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:38.197 21:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:38.197 21:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:38.197 21:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:38.197 21:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:38.197 21:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:38.197 21:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:38.197 21:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:38.197 21:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:38.197 21:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:38.197 21:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:38.197 21:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:11:38.197 21:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:38.197 21:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68756 00:11:38.197 21:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68756 00:11:38.197 21:38:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68756 ']' 00:11:38.197 21:38:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.197 21:38:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:38.197 21:38:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.197 21:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:38.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.197 21:38:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:38.197 21:38:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.197 [2024-12-10 21:38:38.854745] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:11:38.197 [2024-12-10 21:38:38.854960] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68756 ] 00:11:38.456 [2024-12-10 21:38:39.031331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.456 [2024-12-10 21:38:39.152949] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.714 [2024-12-10 21:38:39.360345] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:38.714 [2024-12-10 21:38:39.360523] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:39.281 21:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:39.281 21:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:39.281 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:39.281 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:39.281 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:39.281 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:39.281 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:39.281 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:39.281 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:39.281 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:39.281 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:39.281 
21:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.281 21:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.281 malloc1 00:11:39.281 21:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.281 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:39.281 21:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.281 21:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.281 [2024-12-10 21:38:39.813895] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:39.281 [2024-12-10 21:38:39.814020] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.281 [2024-12-10 21:38:39.814067] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:39.281 [2024-12-10 21:38:39.814126] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.281 [2024-12-10 21:38:39.816665] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.281 [2024-12-10 21:38:39.816754] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:39.281 pt1 00:11:39.281 21:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.281 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:39.281 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:39.281 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:39.281 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:39.281 21:38:39 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:39.281 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:39.281 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:39.281 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:39.281 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:39.281 21:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.281 21:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.281 malloc2 00:11:39.281 21:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.281 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:39.281 21:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.281 21:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.281 [2024-12-10 21:38:39.877849] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:39.281 [2024-12-10 21:38:39.877920] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.281 [2024-12-10 21:38:39.877964] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:39.281 [2024-12-10 21:38:39.877974] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.282 [2024-12-10 21:38:39.880340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.282 [2024-12-10 21:38:39.880384] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:39.282 
pt2 00:11:39.282 21:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.282 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:39.282 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:39.282 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:39.282 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:39.282 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:39.282 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:39.282 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:39.282 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:39.282 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:39.282 21:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.282 21:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.282 malloc3 00:11:39.282 21:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.282 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:39.282 21:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.282 21:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.282 [2024-12-10 21:38:39.947621] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:39.282 [2024-12-10 21:38:39.947693] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.282 [2024-12-10 21:38:39.947721] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:39.282 [2024-12-10 21:38:39.947733] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.282 [2024-12-10 21:38:39.950143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.282 [2024-12-10 21:38:39.950183] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:39.282 pt3 00:11:39.282 21:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.282 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:39.282 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:39.282 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:11:39.282 21:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.282 21:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.282 [2024-12-10 21:38:39.959629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:39.282 [2024-12-10 21:38:39.961705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:39.282 [2024-12-10 21:38:39.961860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:39.282 [2024-12-10 21:38:39.962059] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:39.282 [2024-12-10 21:38:39.962081] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:39.282 [2024-12-10 21:38:39.962392] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:39.282 
[2024-12-10 21:38:39.962618] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:39.282 [2024-12-10 21:38:39.962634] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:39.282 [2024-12-10 21:38:39.962812] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:39.282 21:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.282 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:39.282 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:39.282 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:39.282 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.282 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.282 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:39.282 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.282 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.282 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.282 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.282 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.282 21:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.282 21:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.282 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:11:39.282 21:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.282 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.282 "name": "raid_bdev1", 00:11:39.282 "uuid": "62a851f6-05dd-40fc-ae1a-c7acd5196195", 00:11:39.282 "strip_size_kb": 0, 00:11:39.282 "state": "online", 00:11:39.282 "raid_level": "raid1", 00:11:39.282 "superblock": true, 00:11:39.282 "num_base_bdevs": 3, 00:11:39.282 "num_base_bdevs_discovered": 3, 00:11:39.282 "num_base_bdevs_operational": 3, 00:11:39.282 "base_bdevs_list": [ 00:11:39.282 { 00:11:39.282 "name": "pt1", 00:11:39.282 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:39.282 "is_configured": true, 00:11:39.282 "data_offset": 2048, 00:11:39.282 "data_size": 63488 00:11:39.282 }, 00:11:39.282 { 00:11:39.282 "name": "pt2", 00:11:39.282 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:39.282 "is_configured": true, 00:11:39.282 "data_offset": 2048, 00:11:39.282 "data_size": 63488 00:11:39.282 }, 00:11:39.282 { 00:11:39.282 "name": "pt3", 00:11:39.282 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:39.282 "is_configured": true, 00:11:39.282 "data_offset": 2048, 00:11:39.282 "data_size": 63488 00:11:39.282 } 00:11:39.282 ] 00:11:39.282 }' 00:11:39.282 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.282 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.851 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:39.851 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:39.851 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:39.851 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:39.851 21:38:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:39.851 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:39.851 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:39.851 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.851 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.851 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:39.851 [2024-12-10 21:38:40.407137] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:39.851 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.851 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:39.851 "name": "raid_bdev1", 00:11:39.851 "aliases": [ 00:11:39.851 "62a851f6-05dd-40fc-ae1a-c7acd5196195" 00:11:39.851 ], 00:11:39.851 "product_name": "Raid Volume", 00:11:39.851 "block_size": 512, 00:11:39.851 "num_blocks": 63488, 00:11:39.851 "uuid": "62a851f6-05dd-40fc-ae1a-c7acd5196195", 00:11:39.851 "assigned_rate_limits": { 00:11:39.851 "rw_ios_per_sec": 0, 00:11:39.851 "rw_mbytes_per_sec": 0, 00:11:39.851 "r_mbytes_per_sec": 0, 00:11:39.851 "w_mbytes_per_sec": 0 00:11:39.851 }, 00:11:39.851 "claimed": false, 00:11:39.851 "zoned": false, 00:11:39.851 "supported_io_types": { 00:11:39.851 "read": true, 00:11:39.851 "write": true, 00:11:39.851 "unmap": false, 00:11:39.851 "flush": false, 00:11:39.851 "reset": true, 00:11:39.851 "nvme_admin": false, 00:11:39.851 "nvme_io": false, 00:11:39.851 "nvme_io_md": false, 00:11:39.851 "write_zeroes": true, 00:11:39.851 "zcopy": false, 00:11:39.851 "get_zone_info": false, 00:11:39.851 "zone_management": false, 00:11:39.851 "zone_append": false, 00:11:39.851 "compare": false, 00:11:39.851 
"compare_and_write": false, 00:11:39.851 "abort": false, 00:11:39.851 "seek_hole": false, 00:11:39.851 "seek_data": false, 00:11:39.851 "copy": false, 00:11:39.851 "nvme_iov_md": false 00:11:39.851 }, 00:11:39.851 "memory_domains": [ 00:11:39.851 { 00:11:39.851 "dma_device_id": "system", 00:11:39.851 "dma_device_type": 1 00:11:39.851 }, 00:11:39.851 { 00:11:39.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.851 "dma_device_type": 2 00:11:39.851 }, 00:11:39.851 { 00:11:39.851 "dma_device_id": "system", 00:11:39.851 "dma_device_type": 1 00:11:39.851 }, 00:11:39.851 { 00:11:39.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.851 "dma_device_type": 2 00:11:39.851 }, 00:11:39.851 { 00:11:39.851 "dma_device_id": "system", 00:11:39.851 "dma_device_type": 1 00:11:39.851 }, 00:11:39.851 { 00:11:39.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.851 "dma_device_type": 2 00:11:39.851 } 00:11:39.851 ], 00:11:39.851 "driver_specific": { 00:11:39.851 "raid": { 00:11:39.851 "uuid": "62a851f6-05dd-40fc-ae1a-c7acd5196195", 00:11:39.851 "strip_size_kb": 0, 00:11:39.851 "state": "online", 00:11:39.851 "raid_level": "raid1", 00:11:39.851 "superblock": true, 00:11:39.851 "num_base_bdevs": 3, 00:11:39.851 "num_base_bdevs_discovered": 3, 00:11:39.851 "num_base_bdevs_operational": 3, 00:11:39.851 "base_bdevs_list": [ 00:11:39.851 { 00:11:39.851 "name": "pt1", 00:11:39.851 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:39.851 "is_configured": true, 00:11:39.851 "data_offset": 2048, 00:11:39.851 "data_size": 63488 00:11:39.851 }, 00:11:39.851 { 00:11:39.851 "name": "pt2", 00:11:39.851 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:39.851 "is_configured": true, 00:11:39.851 "data_offset": 2048, 00:11:39.851 "data_size": 63488 00:11:39.851 }, 00:11:39.851 { 00:11:39.851 "name": "pt3", 00:11:39.851 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:39.851 "is_configured": true, 00:11:39.851 "data_offset": 2048, 00:11:39.851 "data_size": 63488 00:11:39.851 } 
00:11:39.851 ] 00:11:39.851 } 00:11:39.851 } 00:11:39.851 }' 00:11:39.851 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:39.851 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:39.851 pt2 00:11:39.851 pt3' 00:11:39.851 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.851 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:39.851 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:39.851 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:39.851 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.851 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.851 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.851 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.851 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.851 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.851 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:39.851 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:39.851 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.852 21:38:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.852 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.852 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.852 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.852 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.852 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:39.852 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.852 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:39.852 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.852 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:40.111 [2024-12-10 21:38:40.670743] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=62a851f6-05dd-40fc-ae1a-c7acd5196195 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 62a851f6-05dd-40fc-ae1a-c7acd5196195 ']' 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.111 [2024-12-10 21:38:40.722326] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:40.111 [2024-12-10 21:38:40.722358] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:40.111 [2024-12-10 21:38:40.722467] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:40.111 [2024-12-10 21:38:40.722549] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:40.111 [2024-12-10 21:38:40.722559] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 
-- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.111 [2024-12-10 21:38:40.870157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:40.111 [2024-12-10 21:38:40.872248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:40.111 [2024-12-10 21:38:40.872376] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:40.111 [2024-12-10 21:38:40.872457] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:40.111 [2024-12-10 21:38:40.872513] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:40.111 [2024-12-10 21:38:40.872535] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:40.111 [2024-12-10 21:38:40.872553] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:40.111 [2024-12-10 21:38:40.872564] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:40.111 request: 00:11:40.111 { 00:11:40.111 "name": "raid_bdev1", 00:11:40.111 "raid_level": "raid1", 00:11:40.111 "base_bdevs": [ 00:11:40.111 "malloc1", 00:11:40.111 "malloc2", 00:11:40.111 "malloc3" 00:11:40.111 ], 00:11:40.111 "superblock": false, 00:11:40.111 "method": "bdev_raid_create", 00:11:40.111 "req_id": 1 00:11:40.111 } 00:11:40.111 Got JSON-RPC error response 00:11:40.111 response: 00:11:40.111 { 00:11:40.111 "code": -17, 00:11:40.111 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:40.111 } 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:40.111 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.370 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:40.370 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:40.370 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:40.370 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.370 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.370 [2024-12-10 21:38:40.933991] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:40.370 [2024-12-10 21:38:40.934110] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.370 [2024-12-10 21:38:40.934140] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:40.370 [2024-12-10 21:38:40.934151] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.370 [2024-12-10 21:38:40.936737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.370 [2024-12-10 21:38:40.936780] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:40.370 [2024-12-10 21:38:40.936891] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:40.370 [2024-12-10 21:38:40.936955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:40.370 pt1 00:11:40.370 
21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.370 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:40.370 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:40.370 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.370 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.370 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.370 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:40.370 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.370 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.370 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.370 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.370 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.370 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.370 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.370 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.370 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.370 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.370 "name": "raid_bdev1", 00:11:40.370 "uuid": "62a851f6-05dd-40fc-ae1a-c7acd5196195", 00:11:40.370 "strip_size_kb": 0, 00:11:40.370 
"state": "configuring", 00:11:40.370 "raid_level": "raid1", 00:11:40.370 "superblock": true, 00:11:40.370 "num_base_bdevs": 3, 00:11:40.370 "num_base_bdevs_discovered": 1, 00:11:40.370 "num_base_bdevs_operational": 3, 00:11:40.370 "base_bdevs_list": [ 00:11:40.370 { 00:11:40.370 "name": "pt1", 00:11:40.370 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:40.370 "is_configured": true, 00:11:40.370 "data_offset": 2048, 00:11:40.370 "data_size": 63488 00:11:40.370 }, 00:11:40.370 { 00:11:40.370 "name": null, 00:11:40.370 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:40.370 "is_configured": false, 00:11:40.370 "data_offset": 2048, 00:11:40.370 "data_size": 63488 00:11:40.370 }, 00:11:40.370 { 00:11:40.370 "name": null, 00:11:40.370 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:40.370 "is_configured": false, 00:11:40.370 "data_offset": 2048, 00:11:40.370 "data_size": 63488 00:11:40.370 } 00:11:40.370 ] 00:11:40.370 }' 00:11:40.370 21:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.370 21:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.628 21:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:11:40.629 21:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:40.629 21:38:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.629 21:38:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.629 [2024-12-10 21:38:41.361288] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:40.629 [2024-12-10 21:38:41.361367] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.629 [2024-12-10 21:38:41.361392] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:40.629 
[2024-12-10 21:38:41.361402] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.629 [2024-12-10 21:38:41.361900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.629 [2024-12-10 21:38:41.361927] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:40.629 [2024-12-10 21:38:41.362019] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:40.629 [2024-12-10 21:38:41.362043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:40.629 pt2 00:11:40.629 21:38:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.629 21:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:40.629 21:38:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.629 21:38:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.629 [2024-12-10 21:38:41.369274] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:40.629 21:38:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.629 21:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:40.629 21:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:40.629 21:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.629 21:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.629 21:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.629 21:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:40.629 21:38:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.629 21:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.629 21:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.629 21:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.629 21:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.629 21:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.629 21:38:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.629 21:38:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.629 21:38:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.888 21:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.888 "name": "raid_bdev1", 00:11:40.888 "uuid": "62a851f6-05dd-40fc-ae1a-c7acd5196195", 00:11:40.888 "strip_size_kb": 0, 00:11:40.888 "state": "configuring", 00:11:40.888 "raid_level": "raid1", 00:11:40.888 "superblock": true, 00:11:40.888 "num_base_bdevs": 3, 00:11:40.888 "num_base_bdevs_discovered": 1, 00:11:40.888 "num_base_bdevs_operational": 3, 00:11:40.888 "base_bdevs_list": [ 00:11:40.888 { 00:11:40.888 "name": "pt1", 00:11:40.888 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:40.888 "is_configured": true, 00:11:40.888 "data_offset": 2048, 00:11:40.888 "data_size": 63488 00:11:40.888 }, 00:11:40.888 { 00:11:40.888 "name": null, 00:11:40.888 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:40.888 "is_configured": false, 00:11:40.888 "data_offset": 0, 00:11:40.888 "data_size": 63488 00:11:40.888 }, 00:11:40.888 { 00:11:40.888 "name": null, 00:11:40.888 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:40.888 "is_configured": false, 00:11:40.888 
"data_offset": 2048, 00:11:40.888 "data_size": 63488 00:11:40.888 } 00:11:40.888 ] 00:11:40.888 }' 00:11:40.888 21:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.888 21:38:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.147 21:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:41.147 21:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:41.147 21:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:41.147 21:38:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.147 21:38:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.147 [2024-12-10 21:38:41.824539] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:41.147 [2024-12-10 21:38:41.824625] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.147 [2024-12-10 21:38:41.824648] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:41.147 [2024-12-10 21:38:41.824662] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.147 [2024-12-10 21:38:41.825164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.147 [2024-12-10 21:38:41.825186] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:41.147 [2024-12-10 21:38:41.825275] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:41.147 [2024-12-10 21:38:41.825312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:41.147 pt2 00:11:41.147 21:38:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.147 21:38:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:41.147 21:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:41.147 21:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:41.147 21:38:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.147 21:38:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.147 [2024-12-10 21:38:41.832500] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:41.147 [2024-12-10 21:38:41.832558] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.147 [2024-12-10 21:38:41.832576] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:41.147 [2024-12-10 21:38:41.832587] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.147 [2024-12-10 21:38:41.833017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.147 [2024-12-10 21:38:41.833048] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:41.147 [2024-12-10 21:38:41.833123] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:41.147 [2024-12-10 21:38:41.833147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:41.147 [2024-12-10 21:38:41.833295] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:41.147 [2024-12-10 21:38:41.833310] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:41.147 [2024-12-10 21:38:41.833583] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:41.147 [2024-12-10 21:38:41.833749] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:11:41.147 [2024-12-10 21:38:41.833758] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:41.147 [2024-12-10 21:38:41.833907] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:41.147 pt3 00:11:41.147 21:38:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.147 21:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:41.147 21:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:41.147 21:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:41.147 21:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:41.147 21:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.147 21:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.147 21:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.147 21:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:41.147 21:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.147 21:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.147 21:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.147 21:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.147 21:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.147 21:38:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.147 21:38:41 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:11:41.147 21:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.147 21:38:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.147 21:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.147 "name": "raid_bdev1", 00:11:41.147 "uuid": "62a851f6-05dd-40fc-ae1a-c7acd5196195", 00:11:41.147 "strip_size_kb": 0, 00:11:41.147 "state": "online", 00:11:41.147 "raid_level": "raid1", 00:11:41.147 "superblock": true, 00:11:41.147 "num_base_bdevs": 3, 00:11:41.147 "num_base_bdevs_discovered": 3, 00:11:41.147 "num_base_bdevs_operational": 3, 00:11:41.147 "base_bdevs_list": [ 00:11:41.147 { 00:11:41.147 "name": "pt1", 00:11:41.147 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:41.147 "is_configured": true, 00:11:41.147 "data_offset": 2048, 00:11:41.147 "data_size": 63488 00:11:41.147 }, 00:11:41.147 { 00:11:41.147 "name": "pt2", 00:11:41.147 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:41.147 "is_configured": true, 00:11:41.147 "data_offset": 2048, 00:11:41.147 "data_size": 63488 00:11:41.147 }, 00:11:41.147 { 00:11:41.147 "name": "pt3", 00:11:41.147 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:41.147 "is_configured": true, 00:11:41.147 "data_offset": 2048, 00:11:41.147 "data_size": 63488 00:11:41.147 } 00:11:41.147 ] 00:11:41.147 }' 00:11:41.147 21:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.147 21:38:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.715 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:41.715 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:41.715 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:11:41.715 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:41.715 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:41.715 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:41.715 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:41.715 21:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.715 21:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.715 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:41.715 [2024-12-10 21:38:42.316047] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:41.715 21:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.715 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:41.715 "name": "raid_bdev1", 00:11:41.715 "aliases": [ 00:11:41.715 "62a851f6-05dd-40fc-ae1a-c7acd5196195" 00:11:41.715 ], 00:11:41.715 "product_name": "Raid Volume", 00:11:41.715 "block_size": 512, 00:11:41.715 "num_blocks": 63488, 00:11:41.715 "uuid": "62a851f6-05dd-40fc-ae1a-c7acd5196195", 00:11:41.715 "assigned_rate_limits": { 00:11:41.715 "rw_ios_per_sec": 0, 00:11:41.715 "rw_mbytes_per_sec": 0, 00:11:41.715 "r_mbytes_per_sec": 0, 00:11:41.715 "w_mbytes_per_sec": 0 00:11:41.715 }, 00:11:41.715 "claimed": false, 00:11:41.715 "zoned": false, 00:11:41.715 "supported_io_types": { 00:11:41.715 "read": true, 00:11:41.715 "write": true, 00:11:41.715 "unmap": false, 00:11:41.715 "flush": false, 00:11:41.715 "reset": true, 00:11:41.715 "nvme_admin": false, 00:11:41.715 "nvme_io": false, 00:11:41.715 "nvme_io_md": false, 00:11:41.715 "write_zeroes": true, 00:11:41.715 "zcopy": false, 00:11:41.715 "get_zone_info": false, 
00:11:41.715 "zone_management": false, 00:11:41.715 "zone_append": false, 00:11:41.715 "compare": false, 00:11:41.715 "compare_and_write": false, 00:11:41.715 "abort": false, 00:11:41.715 "seek_hole": false, 00:11:41.715 "seek_data": false, 00:11:41.715 "copy": false, 00:11:41.715 "nvme_iov_md": false 00:11:41.715 }, 00:11:41.715 "memory_domains": [ 00:11:41.715 { 00:11:41.715 "dma_device_id": "system", 00:11:41.715 "dma_device_type": 1 00:11:41.715 }, 00:11:41.715 { 00:11:41.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.715 "dma_device_type": 2 00:11:41.715 }, 00:11:41.715 { 00:11:41.715 "dma_device_id": "system", 00:11:41.715 "dma_device_type": 1 00:11:41.715 }, 00:11:41.715 { 00:11:41.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.715 "dma_device_type": 2 00:11:41.715 }, 00:11:41.715 { 00:11:41.715 "dma_device_id": "system", 00:11:41.715 "dma_device_type": 1 00:11:41.715 }, 00:11:41.715 { 00:11:41.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.715 "dma_device_type": 2 00:11:41.715 } 00:11:41.715 ], 00:11:41.715 "driver_specific": { 00:11:41.715 "raid": { 00:11:41.715 "uuid": "62a851f6-05dd-40fc-ae1a-c7acd5196195", 00:11:41.715 "strip_size_kb": 0, 00:11:41.715 "state": "online", 00:11:41.715 "raid_level": "raid1", 00:11:41.715 "superblock": true, 00:11:41.715 "num_base_bdevs": 3, 00:11:41.715 "num_base_bdevs_discovered": 3, 00:11:41.715 "num_base_bdevs_operational": 3, 00:11:41.715 "base_bdevs_list": [ 00:11:41.715 { 00:11:41.715 "name": "pt1", 00:11:41.715 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:41.715 "is_configured": true, 00:11:41.715 "data_offset": 2048, 00:11:41.715 "data_size": 63488 00:11:41.715 }, 00:11:41.715 { 00:11:41.715 "name": "pt2", 00:11:41.715 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:41.715 "is_configured": true, 00:11:41.715 "data_offset": 2048, 00:11:41.715 "data_size": 63488 00:11:41.715 }, 00:11:41.715 { 00:11:41.715 "name": "pt3", 00:11:41.715 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:11:41.715 "is_configured": true, 00:11:41.715 "data_offset": 2048, 00:11:41.715 "data_size": 63488 00:11:41.715 } 00:11:41.715 ] 00:11:41.715 } 00:11:41.715 } 00:11:41.715 }' 00:11:41.715 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:41.715 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:41.715 pt2 00:11:41.715 pt3' 00:11:41.715 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.715 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:41.716 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.716 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:41.716 21:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.716 21:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.716 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.716 21:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.975 [2024-12-10 21:38:42.591588] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 62a851f6-05dd-40fc-ae1a-c7acd5196195 '!=' 62a851f6-05dd-40fc-ae1a-c7acd5196195 ']' 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.975 [2024-12-10 21:38:42.639242] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.975 21:38:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.975 "name": "raid_bdev1", 00:11:41.975 "uuid": "62a851f6-05dd-40fc-ae1a-c7acd5196195", 00:11:41.975 "strip_size_kb": 0, 00:11:41.975 "state": "online", 00:11:41.975 "raid_level": "raid1", 00:11:41.975 "superblock": true, 00:11:41.975 "num_base_bdevs": 3, 00:11:41.975 "num_base_bdevs_discovered": 2, 00:11:41.975 "num_base_bdevs_operational": 2, 00:11:41.975 "base_bdevs_list": [ 00:11:41.975 { 00:11:41.975 "name": null, 00:11:41.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.975 "is_configured": false, 00:11:41.975 "data_offset": 0, 00:11:41.975 "data_size": 63488 00:11:41.975 }, 00:11:41.975 { 00:11:41.975 "name": "pt2", 00:11:41.975 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:41.975 "is_configured": true, 00:11:41.975 "data_offset": 2048, 00:11:41.975 "data_size": 63488 00:11:41.975 }, 00:11:41.975 { 00:11:41.975 "name": "pt3", 00:11:41.975 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:41.975 "is_configured": true, 00:11:41.975 "data_offset": 2048, 00:11:41.975 "data_size": 63488 00:11:41.975 } 
00:11:41.975 ] 00:11:41.975 }' 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.975 21:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.542 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:42.542 21:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.542 21:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.542 [2024-12-10 21:38:43.086403] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:42.542 [2024-12-10 21:38:43.086491] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:42.542 [2024-12-10 21:38:43.086607] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:42.542 [2024-12-10 21:38:43.086688] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:42.542 [2024-12-10 21:38:43.086731] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:42.542 21:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.542 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.542 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:42.542 21:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.542 21:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.542 21:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.542 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:42.542 21:38:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:42.542 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:42.542 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:42.542 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:42.542 21:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.542 21:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.542 21:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.542 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:42.542 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:42.542 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:42.542 21:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.542 21:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.542 21:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.542 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:42.542 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:42.542 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:42.542 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:42.542 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:42.542 21:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.542 21:38:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.543 [2024-12-10 21:38:43.142263] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:42.543 [2024-12-10 21:38:43.142322] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.543 [2024-12-10 21:38:43.142356] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:11:42.543 [2024-12-10 21:38:43.142368] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.543 [2024-12-10 21:38:43.144816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.543 [2024-12-10 21:38:43.144858] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:42.543 [2024-12-10 21:38:43.144937] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:42.543 [2024-12-10 21:38:43.144990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:42.543 pt2 00:11:42.543 21:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.543 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:42.543 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.543 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.543 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.543 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.543 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:42.543 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.543 21:38:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.543 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.543 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.543 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.543 21:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.543 21:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.543 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.543 21:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.543 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.543 "name": "raid_bdev1", 00:11:42.543 "uuid": "62a851f6-05dd-40fc-ae1a-c7acd5196195", 00:11:42.543 "strip_size_kb": 0, 00:11:42.543 "state": "configuring", 00:11:42.543 "raid_level": "raid1", 00:11:42.543 "superblock": true, 00:11:42.543 "num_base_bdevs": 3, 00:11:42.543 "num_base_bdevs_discovered": 1, 00:11:42.543 "num_base_bdevs_operational": 2, 00:11:42.543 "base_bdevs_list": [ 00:11:42.543 { 00:11:42.543 "name": null, 00:11:42.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.543 "is_configured": false, 00:11:42.543 "data_offset": 2048, 00:11:42.543 "data_size": 63488 00:11:42.543 }, 00:11:42.543 { 00:11:42.543 "name": "pt2", 00:11:42.543 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:42.543 "is_configured": true, 00:11:42.543 "data_offset": 2048, 00:11:42.543 "data_size": 63488 00:11:42.543 }, 00:11:42.543 { 00:11:42.543 "name": null, 00:11:42.543 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:42.543 "is_configured": false, 00:11:42.543 "data_offset": 2048, 00:11:42.543 "data_size": 63488 00:11:42.543 } 
00:11:42.543 ] 00:11:42.543 }' 00:11:42.543 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.543 21:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.801 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:42.801 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:42.801 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:11:42.801 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:42.801 21:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.801 21:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.801 [2024-12-10 21:38:43.577573] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:42.801 [2024-12-10 21:38:43.577643] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.801 [2024-12-10 21:38:43.577664] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:42.801 [2024-12-10 21:38:43.577676] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.801 [2024-12-10 21:38:43.578135] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.801 [2024-12-10 21:38:43.578156] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:42.801 [2024-12-10 21:38:43.578267] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:42.801 [2024-12-10 21:38:43.578303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:42.801 [2024-12-10 21:38:43.578481] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:11:42.801 [2024-12-10 21:38:43.578496] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:42.801 [2024-12-10 21:38:43.578809] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:42.801 [2024-12-10 21:38:43.578974] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:42.801 [2024-12-10 21:38:43.578985] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:42.801 [2024-12-10 21:38:43.579129] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.801 pt3 00:11:42.801 21:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.801 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:42.801 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:43.059 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:43.059 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.059 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.059 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:43.059 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.059 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.059 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.059 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.059 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.059 
21:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.059 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:43.059 21:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.059 21:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.059 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.059 "name": "raid_bdev1", 00:11:43.059 "uuid": "62a851f6-05dd-40fc-ae1a-c7acd5196195", 00:11:43.059 "strip_size_kb": 0, 00:11:43.059 "state": "online", 00:11:43.059 "raid_level": "raid1", 00:11:43.059 "superblock": true, 00:11:43.059 "num_base_bdevs": 3, 00:11:43.059 "num_base_bdevs_discovered": 2, 00:11:43.059 "num_base_bdevs_operational": 2, 00:11:43.059 "base_bdevs_list": [ 00:11:43.059 { 00:11:43.059 "name": null, 00:11:43.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.059 "is_configured": false, 00:11:43.059 "data_offset": 2048, 00:11:43.059 "data_size": 63488 00:11:43.059 }, 00:11:43.059 { 00:11:43.059 "name": "pt2", 00:11:43.059 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:43.059 "is_configured": true, 00:11:43.059 "data_offset": 2048, 00:11:43.059 "data_size": 63488 00:11:43.059 }, 00:11:43.059 { 00:11:43.059 "name": "pt3", 00:11:43.059 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:43.059 "is_configured": true, 00:11:43.059 "data_offset": 2048, 00:11:43.059 "data_size": 63488 00:11:43.059 } 00:11:43.059 ] 00:11:43.059 }' 00:11:43.059 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.059 21:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.318 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:43.318 21:38:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.318 21:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.318 [2024-12-10 21:38:43.992943] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:43.318 [2024-12-10 21:38:43.993022] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:43.318 [2024-12-10 21:38:43.993132] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:43.318 [2024-12-10 21:38:43.993230] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:43.318 [2024-12-10 21:38:43.993268] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:43.318 21:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.318 21:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.318 21:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.318 21:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.318 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:43.318 21:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.318 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:43.318 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:43.318 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:11:43.318 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:11:43.318 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:11:43.318 21:38:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.318 21:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.318 21:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.318 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:43.318 21:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.318 21:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.318 [2024-12-10 21:38:44.068857] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:43.318 [2024-12-10 21:38:44.068995] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:43.318 [2024-12-10 21:38:44.069041] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:43.318 [2024-12-10 21:38:44.069075] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:43.318 [2024-12-10 21:38:44.071625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:43.318 [2024-12-10 21:38:44.071707] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:43.318 [2024-12-10 21:38:44.071865] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:43.318 [2024-12-10 21:38:44.071954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:43.318 [2024-12-10 21:38:44.072167] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:43.318 [2024-12-10 21:38:44.072228] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:43.318 [2024-12-10 21:38:44.072287] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:11:43.318 [2024-12-10 21:38:44.072396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:43.318 pt1 00:11:43.318 21:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.318 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:11:43.318 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:43.318 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:43.318 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:43.318 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.318 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.318 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:43.318 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.318 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.318 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.318 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.318 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.318 21:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.318 21:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.318 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:43.318 21:38:44 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.577 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.577 "name": "raid_bdev1", 00:11:43.577 "uuid": "62a851f6-05dd-40fc-ae1a-c7acd5196195", 00:11:43.577 "strip_size_kb": 0, 00:11:43.577 "state": "configuring", 00:11:43.577 "raid_level": "raid1", 00:11:43.577 "superblock": true, 00:11:43.577 "num_base_bdevs": 3, 00:11:43.577 "num_base_bdevs_discovered": 1, 00:11:43.577 "num_base_bdevs_operational": 2, 00:11:43.577 "base_bdevs_list": [ 00:11:43.577 { 00:11:43.577 "name": null, 00:11:43.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.577 "is_configured": false, 00:11:43.577 "data_offset": 2048, 00:11:43.577 "data_size": 63488 00:11:43.577 }, 00:11:43.577 { 00:11:43.577 "name": "pt2", 00:11:43.577 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:43.577 "is_configured": true, 00:11:43.577 "data_offset": 2048, 00:11:43.577 "data_size": 63488 00:11:43.577 }, 00:11:43.577 { 00:11:43.577 "name": null, 00:11:43.577 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:43.577 "is_configured": false, 00:11:43.577 "data_offset": 2048, 00:11:43.577 "data_size": 63488 00:11:43.577 } 00:11:43.577 ] 00:11:43.577 }' 00:11:43.577 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.577 21:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.868 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:43.868 21:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.868 21:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.868 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:43.868 21:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:43.868 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:43.868 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:43.869 21:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.869 21:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.869 [2024-12-10 21:38:44.583976] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:43.869 [2024-12-10 21:38:44.584071] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:43.869 [2024-12-10 21:38:44.584097] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:43.869 [2024-12-10 21:38:44.584106] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:43.869 [2024-12-10 21:38:44.584652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:43.869 [2024-12-10 21:38:44.584672] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:43.869 [2024-12-10 21:38:44.584773] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:43.869 [2024-12-10 21:38:44.584796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:43.869 [2024-12-10 21:38:44.584920] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:43.869 [2024-12-10 21:38:44.584929] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:43.869 [2024-12-10 21:38:44.585171] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:43.869 [2024-12-10 21:38:44.585327] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:43.869 [2024-12-10 21:38:44.585341] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:43.869 [2024-12-10 21:38:44.585494] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:43.869 pt3 00:11:43.869 21:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.869 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:43.869 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:43.869 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:43.869 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.869 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.869 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:43.869 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.869 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.869 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.869 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.869 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:43.869 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.869 21:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.869 21:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.869 21:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:11:43.869 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.869 "name": "raid_bdev1", 00:11:43.869 "uuid": "62a851f6-05dd-40fc-ae1a-c7acd5196195", 00:11:43.869 "strip_size_kb": 0, 00:11:43.869 "state": "online", 00:11:43.869 "raid_level": "raid1", 00:11:43.869 "superblock": true, 00:11:43.869 "num_base_bdevs": 3, 00:11:43.869 "num_base_bdevs_discovered": 2, 00:11:43.869 "num_base_bdevs_operational": 2, 00:11:43.869 "base_bdevs_list": [ 00:11:43.869 { 00:11:43.869 "name": null, 00:11:43.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.869 "is_configured": false, 00:11:43.869 "data_offset": 2048, 00:11:43.869 "data_size": 63488 00:11:43.869 }, 00:11:43.869 { 00:11:43.869 "name": "pt2", 00:11:43.869 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:43.869 "is_configured": true, 00:11:43.869 "data_offset": 2048, 00:11:43.869 "data_size": 63488 00:11:43.869 }, 00:11:43.869 { 00:11:43.869 "name": "pt3", 00:11:43.869 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:43.869 "is_configured": true, 00:11:43.869 "data_offset": 2048, 00:11:43.869 "data_size": 63488 00:11:43.869 } 00:11:43.869 ] 00:11:43.869 }' 00:11:43.869 21:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.869 21:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.535 21:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:44.535 21:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:44.535 21:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.535 21:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.535 21:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.535 21:38:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:44.535 21:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:44.535 21:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.535 21:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.535 21:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:44.535 [2024-12-10 21:38:45.111450] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:44.535 21:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.535 21:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 62a851f6-05dd-40fc-ae1a-c7acd5196195 '!=' 62a851f6-05dd-40fc-ae1a-c7acd5196195 ']' 00:11:44.535 21:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68756 00:11:44.535 21:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68756 ']' 00:11:44.535 21:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68756 00:11:44.535 21:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:44.535 21:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:44.535 21:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68756 00:11:44.535 killing process with pid 68756 00:11:44.535 21:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:44.535 21:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:44.535 21:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68756' 00:11:44.535 21:38:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68756 00:11:44.535 [2024-12-10 21:38:45.185762] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:44.535 21:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68756 00:11:44.535 [2024-12-10 21:38:45.185872] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:44.535 [2024-12-10 21:38:45.185943] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:44.535 [2024-12-10 21:38:45.185957] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:11:44.793 [2024-12-10 21:38:45.505707] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:46.170 ************************************ 00:11:46.170 END TEST raid_superblock_test 00:11:46.170 ************************************ 00:11:46.170 21:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:46.170 00:11:46.170 real 0m7.948s 00:11:46.170 user 0m12.399s 00:11:46.170 sys 0m1.270s 00:11:46.170 21:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:46.170 21:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.170 21:38:46 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:11:46.170 21:38:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:46.170 21:38:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.170 21:38:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:46.170 ************************************ 00:11:46.170 START TEST raid_read_error_test 00:11:46.170 ************************************ 00:11:46.170 21:38:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:11:46.170 21:38:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:46.170 21:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:46.170 21:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:46.170 21:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:46.170 21:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:46.170 21:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:46.170 21:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:46.170 21:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:46.170 21:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:46.170 21:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:46.170 21:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:46.170 21:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:46.170 21:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:46.170 21:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:46.170 21:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:46.170 21:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:46.170 21:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:46.170 21:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:46.170 21:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:46.170 21:38:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:46.170 21:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:46.170 21:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:46.170 21:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:46.170 21:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:46.170 21:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.MhEVTgLfzf 00:11:46.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.170 21:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69207 00:11:46.170 21:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69207 00:11:46.170 21:38:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69207 ']' 00:11:46.170 21:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:46.170 21:38:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.170 21:38:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:46.170 21:38:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.170 21:38:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:46.170 21:38:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.170 [2024-12-10 21:38:46.884664] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:11:46.170 [2024-12-10 21:38:46.884870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69207 ] 00:11:46.431 [2024-12-10 21:38:47.062278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.431 [2024-12-10 21:38:47.185242] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.691 [2024-12-10 21:38:47.409680] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:46.691 [2024-12-10 21:38:47.409822] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.261 BaseBdev1_malloc 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.261 true 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.261 [2024-12-10 21:38:47.817410] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:47.261 [2024-12-10 21:38:47.817492] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.261 [2024-12-10 21:38:47.817517] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:47.261 [2024-12-10 21:38:47.817529] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.261 [2024-12-10 21:38:47.819869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.261 [2024-12-10 21:38:47.819919] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:47.261 BaseBdev1 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.261 BaseBdev2_malloc 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.261 true 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.261 [2024-12-10 21:38:47.878896] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:47.261 [2024-12-10 21:38:47.878957] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.261 [2024-12-10 21:38:47.878974] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:47.261 [2024-12-10 21:38:47.878984] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.261 [2024-12-10 21:38:47.881150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.261 [2024-12-10 21:38:47.881286] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:47.261 BaseBdev2 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.261 BaseBdev3_malloc 00:11:47.261 21:38:47 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.261 true 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.261 [2024-12-10 21:38:47.953375] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:47.261 [2024-12-10 21:38:47.953520] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.261 [2024-12-10 21:38:47.953547] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:47.261 [2024-12-10 21:38:47.953559] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.261 [2024-12-10 21:38:47.955895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.261 [2024-12-10 21:38:47.955944] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:47.261 BaseBdev3 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.261 21:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.261 [2024-12-10 21:38:47.965426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:47.261 [2024-12-10 21:38:47.967213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:47.261 [2024-12-10 21:38:47.967307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:47.261 [2024-12-10 21:38:47.967550] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:47.261 [2024-12-10 21:38:47.967565] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:47.261 [2024-12-10 21:38:47.967866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:47.261 [2024-12-10 21:38:47.968052] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:47.261 [2024-12-10 21:38:47.968066] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:47.261 [2024-12-10 21:38:47.968235] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.262 21:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.262 21:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:47.262 21:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.262 21:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.262 21:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.262 21:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.262 21:38:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:47.262 21:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.262 21:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.262 21:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.262 21:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.262 21:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.262 21:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.262 21:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.262 21:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.262 21:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.262 21:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.262 "name": "raid_bdev1", 00:11:47.262 "uuid": "029cdf5d-b5a3-4e11-a515-3fde21241de1", 00:11:47.262 "strip_size_kb": 0, 00:11:47.262 "state": "online", 00:11:47.262 "raid_level": "raid1", 00:11:47.262 "superblock": true, 00:11:47.262 "num_base_bdevs": 3, 00:11:47.262 "num_base_bdevs_discovered": 3, 00:11:47.262 "num_base_bdevs_operational": 3, 00:11:47.262 "base_bdevs_list": [ 00:11:47.262 { 00:11:47.262 "name": "BaseBdev1", 00:11:47.262 "uuid": "7937dc2c-dfef-56c1-a814-50e03039109e", 00:11:47.262 "is_configured": true, 00:11:47.262 "data_offset": 2048, 00:11:47.262 "data_size": 63488 00:11:47.262 }, 00:11:47.262 { 00:11:47.262 "name": "BaseBdev2", 00:11:47.262 "uuid": "c347ad69-6d42-5770-8bd8-efb749571ac3", 00:11:47.262 "is_configured": true, 00:11:47.262 "data_offset": 2048, 00:11:47.262 "data_size": 63488 
00:11:47.262 }, 00:11:47.262 { 00:11:47.262 "name": "BaseBdev3", 00:11:47.262 "uuid": "67bc0899-f664-5d5e-a71e-ec807bf7c954", 00:11:47.262 "is_configured": true, 00:11:47.262 "data_offset": 2048, 00:11:47.262 "data_size": 63488 00:11:47.262 } 00:11:47.262 ] 00:11:47.262 }' 00:11:47.262 21:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.262 21:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.831 21:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:47.831 21:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:47.831 [2024-12-10 21:38:48.525985] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:48.770 21:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:48.770 21:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.770 21:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.770 21:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.770 21:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:48.770 21:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:48.770 21:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:48.770 21:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:48.770 21:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:48.770 21:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.770 
21:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.770 21:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.770 21:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.770 21:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:48.770 21:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.770 21:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.771 21:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.771 21:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.771 21:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.771 21:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.771 21:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.771 21:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.771 21:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.771 21:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.771 "name": "raid_bdev1", 00:11:48.771 "uuid": "029cdf5d-b5a3-4e11-a515-3fde21241de1", 00:11:48.771 "strip_size_kb": 0, 00:11:48.771 "state": "online", 00:11:48.771 "raid_level": "raid1", 00:11:48.771 "superblock": true, 00:11:48.771 "num_base_bdevs": 3, 00:11:48.771 "num_base_bdevs_discovered": 3, 00:11:48.771 "num_base_bdevs_operational": 3, 00:11:48.771 "base_bdevs_list": [ 00:11:48.771 { 00:11:48.771 "name": "BaseBdev1", 00:11:48.771 "uuid": "7937dc2c-dfef-56c1-a814-50e03039109e", 
00:11:48.771 "is_configured": true, 00:11:48.771 "data_offset": 2048, 00:11:48.771 "data_size": 63488 00:11:48.771 }, 00:11:48.771 { 00:11:48.771 "name": "BaseBdev2", 00:11:48.771 "uuid": "c347ad69-6d42-5770-8bd8-efb749571ac3", 00:11:48.771 "is_configured": true, 00:11:48.771 "data_offset": 2048, 00:11:48.771 "data_size": 63488 00:11:48.771 }, 00:11:48.771 { 00:11:48.771 "name": "BaseBdev3", 00:11:48.771 "uuid": "67bc0899-f664-5d5e-a71e-ec807bf7c954", 00:11:48.771 "is_configured": true, 00:11:48.771 "data_offset": 2048, 00:11:48.771 "data_size": 63488 00:11:48.771 } 00:11:48.771 ] 00:11:48.771 }' 00:11:48.771 21:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.771 21:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.340 21:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:49.340 21:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.340 21:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.340 [2024-12-10 21:38:49.912119] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:49.340 [2024-12-10 21:38:49.912225] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:49.340 [2024-12-10 21:38:49.915539] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:49.340 [2024-12-10 21:38:49.915655] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:49.340 [2024-12-10 21:38:49.915799] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:49.340 [2024-12-10 21:38:49.915859] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:49.340 { 00:11:49.340 "results": [ 00:11:49.340 { 00:11:49.340 "job": "raid_bdev1", 
00:11:49.340 "core_mask": "0x1", 00:11:49.340 "workload": "randrw", 00:11:49.340 "percentage": 50, 00:11:49.340 "status": "finished", 00:11:49.340 "queue_depth": 1, 00:11:49.340 "io_size": 131072, 00:11:49.340 "runtime": 1.387019, 00:11:49.340 "iops": 11879.433518935213, 00:11:49.340 "mibps": 1484.9291898669017, 00:11:49.340 "io_failed": 0, 00:11:49.340 "io_timeout": 0, 00:11:49.340 "avg_latency_us": 81.01514981979645, 00:11:49.340 "min_latency_us": 26.270742358078603, 00:11:49.340 "max_latency_us": 1581.1633187772925 00:11:49.340 } 00:11:49.340 ], 00:11:49.340 "core_count": 1 00:11:49.340 } 00:11:49.340 21:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.340 21:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69207 00:11:49.340 21:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69207 ']' 00:11:49.340 21:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69207 00:11:49.340 21:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:49.340 21:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:49.340 21:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69207 00:11:49.340 killing process with pid 69207 00:11:49.340 21:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:49.340 21:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:49.340 21:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69207' 00:11:49.340 21:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69207 00:11:49.340 [2024-12-10 21:38:49.946923] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:49.340 21:38:49 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69207 00:11:49.600 [2024-12-10 21:38:50.202843] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:50.977 21:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.MhEVTgLfzf 00:11:50.977 21:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:50.977 21:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:50.977 21:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:50.977 ************************************ 00:11:50.977 END TEST raid_read_error_test 00:11:50.977 ************************************ 00:11:50.977 21:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:50.977 21:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:50.977 21:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:50.977 21:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:50.977 00:11:50.978 real 0m4.707s 00:11:50.978 user 0m5.593s 00:11:50.978 sys 0m0.562s 00:11:50.978 21:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.978 21:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.978 21:38:51 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:11:50.978 21:38:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:50.978 21:38:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:50.978 21:38:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:50.978 ************************************ 00:11:50.978 START TEST raid_write_error_test 00:11:50.978 ************************************ 00:11:50.978 21:38:51 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:11:50.978 21:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:50.978 21:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:50.978 21:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:50.978 21:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:50.978 21:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.978 21:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:50.978 21:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:50.978 21:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.978 21:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:50.978 21:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:50.978 21:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.978 21:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:50.978 21:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:50.978 21:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.978 21:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:50.978 21:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:50.978 21:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:50.978 21:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:11:50.978 21:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:50.978 21:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:50.978 21:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:50.978 21:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:50.978 21:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:50.978 21:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:50.978 21:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LMYUXd1vKL 00:11:50.978 21:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:50.978 21:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69353 00:11:50.978 21:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69353 00:11:50.978 21:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69353 ']' 00:11:50.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.978 21:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.978 21:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.978 21:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:50.978 21:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.978 21:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.978 [2024-12-10 21:38:51.665899] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:11:50.978 [2024-12-10 21:38:51.666057] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69353 ] 00:11:51.237 [2024-12-10 21:38:51.849305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.237 [2024-12-10 21:38:51.978973] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.499 [2024-12-10 21:38:52.197562] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.499 [2024-12-10 21:38:52.197616] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.764 21:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:51.764 21:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:51.764 21:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:51.764 21:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:51.764 21:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.764 21:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.024 BaseBdev1_malloc 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.024 true 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.024 [2024-12-10 21:38:52.598829] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:52.024 [2024-12-10 21:38:52.598889] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.024 [2024-12-10 21:38:52.598914] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:52.024 [2024-12-10 21:38:52.598926] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.024 [2024-12-10 21:38:52.601279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.024 [2024-12-10 21:38:52.601368] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:52.024 BaseBdev1 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:52.024 BaseBdev2_malloc 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.024 true 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.024 [2024-12-10 21:38:52.662929] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:52.024 [2024-12-10 21:38:52.662990] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.024 [2024-12-10 21:38:52.663009] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:52.024 [2024-12-10 21:38:52.663021] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.024 [2024-12-10 21:38:52.665320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.024 [2024-12-10 21:38:52.665361] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:52.024 BaseBdev2 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:52.024 21:38:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.024 BaseBdev3_malloc 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.024 true 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.024 [2024-12-10 21:38:52.736000] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:52.024 [2024-12-10 21:38:52.736058] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.024 [2024-12-10 21:38:52.736077] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:52.024 [2024-12-10 21:38:52.736089] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.024 [2024-12-10 21:38:52.738435] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.024 [2024-12-10 21:38:52.738472] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:11:52.024 BaseBdev3 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.024 [2024-12-10 21:38:52.744066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:52.024 [2024-12-10 21:38:52.746087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:52.024 [2024-12-10 21:38:52.746255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:52.024 [2024-12-10 21:38:52.746562] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:52.024 [2024-12-10 21:38:52.746583] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:52.024 [2024-12-10 21:38:52.746895] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:52.024 [2024-12-10 21:38:52.747086] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:52.024 [2024-12-10 21:38:52.747098] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:52.024 [2024-12-10 21:38:52.747261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.024 21:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.025 21:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.025 21:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.025 21:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.025 21:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.025 21:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.025 21:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.025 "name": "raid_bdev1", 00:11:52.025 "uuid": "907ad11e-d2ad-4830-9229-de6df9b94ef6", 00:11:52.025 "strip_size_kb": 0, 00:11:52.025 "state": "online", 00:11:52.025 "raid_level": "raid1", 00:11:52.025 "superblock": true, 00:11:52.025 "num_base_bdevs": 3, 00:11:52.025 "num_base_bdevs_discovered": 3, 00:11:52.025 "num_base_bdevs_operational": 3, 00:11:52.025 "base_bdevs_list": [ 00:11:52.025 { 00:11:52.025 "name": "BaseBdev1", 00:11:52.025 
"uuid": "ffa827b0-e73c-53b4-a784-d281dd175c1e", 00:11:52.025 "is_configured": true, 00:11:52.025 "data_offset": 2048, 00:11:52.025 "data_size": 63488 00:11:52.025 }, 00:11:52.025 { 00:11:52.025 "name": "BaseBdev2", 00:11:52.025 "uuid": "226238cb-8136-5898-b7fb-c6ef3520a1da", 00:11:52.025 "is_configured": true, 00:11:52.025 "data_offset": 2048, 00:11:52.025 "data_size": 63488 00:11:52.025 }, 00:11:52.025 { 00:11:52.025 "name": "BaseBdev3", 00:11:52.025 "uuid": "baba9cc4-496c-56ea-85f1-d8728ab57082", 00:11:52.025 "is_configured": true, 00:11:52.025 "data_offset": 2048, 00:11:52.025 "data_size": 63488 00:11:52.025 } 00:11:52.025 ] 00:11:52.025 }' 00:11:52.025 21:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.025 21:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.593 21:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:52.593 21:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:52.593 [2024-12-10 21:38:53.329046] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:53.532 21:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:53.532 21:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.532 21:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.532 [2024-12-10 21:38:54.232884] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:53.532 [2024-12-10 21:38:54.233033] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:53.532 [2024-12-10 21:38:54.233327] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:11:53.532 21:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.532 21:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:53.532 21:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:53.532 21:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:53.532 21:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:11:53.532 21:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:53.532 21:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:53.532 21:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.532 21:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.532 21:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.532 21:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:53.532 21:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.532 21:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.532 21:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.532 21:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.532 21:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.532 21:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.532 21:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.532 
21:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.532 21:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.532 21:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.532 "name": "raid_bdev1", 00:11:53.532 "uuid": "907ad11e-d2ad-4830-9229-de6df9b94ef6", 00:11:53.532 "strip_size_kb": 0, 00:11:53.532 "state": "online", 00:11:53.532 "raid_level": "raid1", 00:11:53.532 "superblock": true, 00:11:53.532 "num_base_bdevs": 3, 00:11:53.532 "num_base_bdevs_discovered": 2, 00:11:53.532 "num_base_bdevs_operational": 2, 00:11:53.532 "base_bdevs_list": [ 00:11:53.532 { 00:11:53.532 "name": null, 00:11:53.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.532 "is_configured": false, 00:11:53.532 "data_offset": 0, 00:11:53.532 "data_size": 63488 00:11:53.532 }, 00:11:53.532 { 00:11:53.532 "name": "BaseBdev2", 00:11:53.532 "uuid": "226238cb-8136-5898-b7fb-c6ef3520a1da", 00:11:53.532 "is_configured": true, 00:11:53.532 "data_offset": 2048, 00:11:53.532 "data_size": 63488 00:11:53.532 }, 00:11:53.532 { 00:11:53.532 "name": "BaseBdev3", 00:11:53.532 "uuid": "baba9cc4-496c-56ea-85f1-d8728ab57082", 00:11:53.532 "is_configured": true, 00:11:53.532 "data_offset": 2048, 00:11:53.532 "data_size": 63488 00:11:53.532 } 00:11:53.532 ] 00:11:53.532 }' 00:11:53.532 21:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.532 21:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.099 21:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:54.099 21:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.099 21:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.099 [2024-12-10 21:38:54.675725] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:54.099 [2024-12-10 21:38:54.675762] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:54.099 [2024-12-10 21:38:54.678774] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:54.099 [2024-12-10 21:38:54.678880] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:54.099 [2024-12-10 21:38:54.678980] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:54.099 [2024-12-10 21:38:54.679039] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:54.099 { 00:11:54.099 "results": [ 00:11:54.099 { 00:11:54.099 "job": "raid_bdev1", 00:11:54.099 "core_mask": "0x1", 00:11:54.099 "workload": "randrw", 00:11:54.099 "percentage": 50, 00:11:54.099 "status": "finished", 00:11:54.099 "queue_depth": 1, 00:11:54.099 "io_size": 131072, 00:11:54.099 "runtime": 1.347318, 00:11:54.099 "iops": 13276.746840760681, 00:11:54.099 "mibps": 1659.5933550950851, 00:11:54.099 "io_failed": 0, 00:11:54.099 "io_timeout": 0, 00:11:54.099 "avg_latency_us": 72.2685269234675, 00:11:54.099 "min_latency_us": 25.6, 00:11:54.099 "max_latency_us": 1566.8541484716156 00:11:54.099 } 00:11:54.099 ], 00:11:54.099 "core_count": 1 00:11:54.099 } 00:11:54.099 21:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.099 21:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69353 00:11:54.099 21:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69353 ']' 00:11:54.099 21:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69353 00:11:54.099 21:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:54.099 21:38:54 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:54.099 21:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69353 00:11:54.099 21:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:54.099 21:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:54.099 21:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69353' 00:11:54.099 killing process with pid 69353 00:11:54.099 21:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69353 00:11:54.099 [2024-12-10 21:38:54.725744] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:54.099 21:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69353 00:11:54.359 [2024-12-10 21:38:54.995427] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:55.734 21:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LMYUXd1vKL 00:11:55.734 21:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:55.734 21:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:55.734 21:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:55.734 21:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:55.734 21:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:55.734 21:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:55.734 21:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:55.734 00:11:55.734 real 0m4.774s 00:11:55.734 user 0m5.678s 00:11:55.734 sys 0m0.592s 00:11:55.734 21:38:56 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:11:55.734 21:38:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.734 ************************************ 00:11:55.734 END TEST raid_write_error_test 00:11:55.734 ************************************ 00:11:55.734 21:38:56 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:11:55.734 21:38:56 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:55.734 21:38:56 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:11:55.734 21:38:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:55.734 21:38:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:55.734 21:38:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:55.734 ************************************ 00:11:55.734 START TEST raid_state_function_test 00:11:55.734 ************************************ 00:11:55.734 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:11:55.734 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:55.734 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:55.734 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:55.734 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:55.734 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:55.734 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:55.734 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:55.734 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:55.734 21:38:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:55.734 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:55.734 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:55.734 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:55.734 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:55.734 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:55.734 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:55.734 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:55.734 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:55.734 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:55.734 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:55.734 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:55.734 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:55.734 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:55.734 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:55.734 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:55.734 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:55.734 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:55.734 21:38:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:55.734 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:55.734 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:55.734 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:55.734 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69497 00:11:55.734 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69497' 00:11:55.734 Process raid pid: 69497 00:11:55.734 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69497 00:11:55.734 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69497 ']' 00:11:55.734 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.734 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:55.734 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.734 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:55.734 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.993 [2024-12-10 21:38:56.521728] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:11:55.993 [2024-12-10 21:38:56.521989] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:55.993 [2024-12-10 21:38:56.696535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.252 [2024-12-10 21:38:56.824234] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.511 [2024-12-10 21:38:57.044155] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:56.511 [2024-12-10 21:38:57.044296] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:56.771 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:56.771 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:56.771 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:56.771 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.771 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.771 [2024-12-10 21:38:57.375024] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:56.771 [2024-12-10 21:38:57.375081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:56.771 [2024-12-10 21:38:57.375097] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:56.771 [2024-12-10 21:38:57.375107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:56.771 [2024-12-10 21:38:57.375114] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:56.771 [2024-12-10 21:38:57.375124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:56.771 [2024-12-10 21:38:57.375130] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:56.771 [2024-12-10 21:38:57.375138] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:56.771 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.771 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:56.771 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.771 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.771 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:56.771 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.771 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.771 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.771 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.771 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.771 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.771 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.771 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.771 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:56.771 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.771 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.771 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.771 "name": "Existed_Raid", 00:11:56.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.771 "strip_size_kb": 64, 00:11:56.771 "state": "configuring", 00:11:56.771 "raid_level": "raid0", 00:11:56.771 "superblock": false, 00:11:56.771 "num_base_bdevs": 4, 00:11:56.771 "num_base_bdevs_discovered": 0, 00:11:56.771 "num_base_bdevs_operational": 4, 00:11:56.771 "base_bdevs_list": [ 00:11:56.771 { 00:11:56.771 "name": "BaseBdev1", 00:11:56.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.771 "is_configured": false, 00:11:56.771 "data_offset": 0, 00:11:56.771 "data_size": 0 00:11:56.771 }, 00:11:56.771 { 00:11:56.771 "name": "BaseBdev2", 00:11:56.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.771 "is_configured": false, 00:11:56.771 "data_offset": 0, 00:11:56.771 "data_size": 0 00:11:56.771 }, 00:11:56.771 { 00:11:56.771 "name": "BaseBdev3", 00:11:56.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.771 "is_configured": false, 00:11:56.771 "data_offset": 0, 00:11:56.771 "data_size": 0 00:11:56.771 }, 00:11:56.771 { 00:11:56.771 "name": "BaseBdev4", 00:11:56.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.771 "is_configured": false, 00:11:56.771 "data_offset": 0, 00:11:56.771 "data_size": 0 00:11:56.771 } 00:11:56.771 ] 00:11:56.771 }' 00:11:56.771 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.771 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.340 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:11:57.340 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.340 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.340 [2024-12-10 21:38:57.878160] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:57.340 [2024-12-10 21:38:57.878282] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:57.340 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.340 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:57.340 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.340 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.340 [2024-12-10 21:38:57.890130] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:57.340 [2024-12-10 21:38:57.890222] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:57.340 [2024-12-10 21:38:57.890255] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:57.340 [2024-12-10 21:38:57.890282] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:57.340 [2024-12-10 21:38:57.890303] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:57.340 [2024-12-10 21:38:57.890326] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:57.340 [2024-12-10 21:38:57.890398] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:57.340 [2024-12-10 21:38:57.890491] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:57.340 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.340 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:57.340 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.340 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.340 [2024-12-10 21:38:57.940887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:57.340 BaseBdev1 00:11:57.340 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.340 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:57.340 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:57.340 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:57.340 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:57.340 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:57.340 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:57.340 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:57.340 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.340 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.340 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.340 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:57.340 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.340 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.340 [ 00:11:57.340 { 00:11:57.340 "name": "BaseBdev1", 00:11:57.340 "aliases": [ 00:11:57.340 "0122fb3c-6c73-4721-b084-57c4c7b8ecb7" 00:11:57.340 ], 00:11:57.340 "product_name": "Malloc disk", 00:11:57.340 "block_size": 512, 00:11:57.340 "num_blocks": 65536, 00:11:57.340 "uuid": "0122fb3c-6c73-4721-b084-57c4c7b8ecb7", 00:11:57.340 "assigned_rate_limits": { 00:11:57.340 "rw_ios_per_sec": 0, 00:11:57.340 "rw_mbytes_per_sec": 0, 00:11:57.340 "r_mbytes_per_sec": 0, 00:11:57.340 "w_mbytes_per_sec": 0 00:11:57.340 }, 00:11:57.340 "claimed": true, 00:11:57.340 "claim_type": "exclusive_write", 00:11:57.340 "zoned": false, 00:11:57.340 "supported_io_types": { 00:11:57.340 "read": true, 00:11:57.340 "write": true, 00:11:57.340 "unmap": true, 00:11:57.340 "flush": true, 00:11:57.340 "reset": true, 00:11:57.340 "nvme_admin": false, 00:11:57.340 "nvme_io": false, 00:11:57.340 "nvme_io_md": false, 00:11:57.340 "write_zeroes": true, 00:11:57.340 "zcopy": true, 00:11:57.340 "get_zone_info": false, 00:11:57.340 "zone_management": false, 00:11:57.340 "zone_append": false, 00:11:57.340 "compare": false, 00:11:57.340 "compare_and_write": false, 00:11:57.340 "abort": true, 00:11:57.340 "seek_hole": false, 00:11:57.340 "seek_data": false, 00:11:57.340 "copy": true, 00:11:57.340 "nvme_iov_md": false 00:11:57.340 }, 00:11:57.340 "memory_domains": [ 00:11:57.340 { 00:11:57.340 "dma_device_id": "system", 00:11:57.340 "dma_device_type": 1 00:11:57.340 }, 00:11:57.340 { 00:11:57.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.340 "dma_device_type": 2 00:11:57.341 } 00:11:57.341 ], 00:11:57.341 "driver_specific": {} 00:11:57.341 } 00:11:57.341 ] 00:11:57.341 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:57.341 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:57.341 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:57.341 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.341 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.341 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:57.341 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.341 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.341 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.341 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.341 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.341 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.341 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.341 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.341 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.341 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.341 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.341 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.341 "name": "Existed_Raid", 
00:11:57.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.341 "strip_size_kb": 64, 00:11:57.341 "state": "configuring", 00:11:57.341 "raid_level": "raid0", 00:11:57.341 "superblock": false, 00:11:57.341 "num_base_bdevs": 4, 00:11:57.341 "num_base_bdevs_discovered": 1, 00:11:57.341 "num_base_bdevs_operational": 4, 00:11:57.341 "base_bdevs_list": [ 00:11:57.341 { 00:11:57.341 "name": "BaseBdev1", 00:11:57.341 "uuid": "0122fb3c-6c73-4721-b084-57c4c7b8ecb7", 00:11:57.341 "is_configured": true, 00:11:57.341 "data_offset": 0, 00:11:57.341 "data_size": 65536 00:11:57.341 }, 00:11:57.341 { 00:11:57.341 "name": "BaseBdev2", 00:11:57.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.341 "is_configured": false, 00:11:57.341 "data_offset": 0, 00:11:57.341 "data_size": 0 00:11:57.341 }, 00:11:57.341 { 00:11:57.341 "name": "BaseBdev3", 00:11:57.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.341 "is_configured": false, 00:11:57.341 "data_offset": 0, 00:11:57.341 "data_size": 0 00:11:57.341 }, 00:11:57.341 { 00:11:57.341 "name": "BaseBdev4", 00:11:57.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.341 "is_configured": false, 00:11:57.341 "data_offset": 0, 00:11:57.341 "data_size": 0 00:11:57.341 } 00:11:57.341 ] 00:11:57.341 }' 00:11:57.341 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.341 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.909 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:57.909 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.909 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.909 [2024-12-10 21:38:58.456064] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:57.909 [2024-12-10 21:38:58.456124] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:57.909 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.909 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:57.909 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.909 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.909 [2024-12-10 21:38:58.468104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:57.909 [2024-12-10 21:38:58.470116] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:57.909 [2024-12-10 21:38:58.470164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:57.909 [2024-12-10 21:38:58.470176] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:57.909 [2024-12-10 21:38:58.470188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:57.909 [2024-12-10 21:38:58.470196] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:57.909 [2024-12-10 21:38:58.470206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:57.909 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.909 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:57.909 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:57.909 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:11:57.909 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.909 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.909 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:57.909 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.909 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.909 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.909 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.909 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.909 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.909 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.909 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.909 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.909 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.909 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.909 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.909 "name": "Existed_Raid", 00:11:57.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.909 "strip_size_kb": 64, 00:11:57.909 "state": "configuring", 00:11:57.909 "raid_level": "raid0", 00:11:57.909 "superblock": false, 00:11:57.909 "num_base_bdevs": 4, 00:11:57.909 
"num_base_bdevs_discovered": 1, 00:11:57.909 "num_base_bdevs_operational": 4, 00:11:57.910 "base_bdevs_list": [ 00:11:57.910 { 00:11:57.910 "name": "BaseBdev1", 00:11:57.910 "uuid": "0122fb3c-6c73-4721-b084-57c4c7b8ecb7", 00:11:57.910 "is_configured": true, 00:11:57.910 "data_offset": 0, 00:11:57.910 "data_size": 65536 00:11:57.910 }, 00:11:57.910 { 00:11:57.910 "name": "BaseBdev2", 00:11:57.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.910 "is_configured": false, 00:11:57.910 "data_offset": 0, 00:11:57.910 "data_size": 0 00:11:57.910 }, 00:11:57.910 { 00:11:57.910 "name": "BaseBdev3", 00:11:57.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.910 "is_configured": false, 00:11:57.910 "data_offset": 0, 00:11:57.910 "data_size": 0 00:11:57.910 }, 00:11:57.910 { 00:11:57.910 "name": "BaseBdev4", 00:11:57.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.910 "is_configured": false, 00:11:57.910 "data_offset": 0, 00:11:57.910 "data_size": 0 00:11:57.910 } 00:11:57.910 ] 00:11:57.910 }' 00:11:57.910 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.910 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.168 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:58.168 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.168 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.427 [2024-12-10 21:38:58.950407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:58.427 BaseBdev2 00:11:58.427 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.427 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:58.427 21:38:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:58.427 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:58.427 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:58.427 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:58.427 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:58.427 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:58.427 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.427 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.427 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.427 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:58.427 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.427 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.427 [ 00:11:58.427 { 00:11:58.427 "name": "BaseBdev2", 00:11:58.427 "aliases": [ 00:11:58.427 "498f519a-38c9-41a8-bc38-584f7c8a96a7" 00:11:58.427 ], 00:11:58.427 "product_name": "Malloc disk", 00:11:58.427 "block_size": 512, 00:11:58.427 "num_blocks": 65536, 00:11:58.427 "uuid": "498f519a-38c9-41a8-bc38-584f7c8a96a7", 00:11:58.427 "assigned_rate_limits": { 00:11:58.427 "rw_ios_per_sec": 0, 00:11:58.427 "rw_mbytes_per_sec": 0, 00:11:58.427 "r_mbytes_per_sec": 0, 00:11:58.427 "w_mbytes_per_sec": 0 00:11:58.427 }, 00:11:58.427 "claimed": true, 00:11:58.427 "claim_type": "exclusive_write", 00:11:58.427 "zoned": false, 00:11:58.427 "supported_io_types": { 
00:11:58.427 "read": true, 00:11:58.427 "write": true, 00:11:58.427 "unmap": true, 00:11:58.427 "flush": true, 00:11:58.427 "reset": true, 00:11:58.427 "nvme_admin": false, 00:11:58.427 "nvme_io": false, 00:11:58.427 "nvme_io_md": false, 00:11:58.427 "write_zeroes": true, 00:11:58.427 "zcopy": true, 00:11:58.427 "get_zone_info": false, 00:11:58.427 "zone_management": false, 00:11:58.427 "zone_append": false, 00:11:58.427 "compare": false, 00:11:58.427 "compare_and_write": false, 00:11:58.427 "abort": true, 00:11:58.427 "seek_hole": false, 00:11:58.427 "seek_data": false, 00:11:58.427 "copy": true, 00:11:58.427 "nvme_iov_md": false 00:11:58.427 }, 00:11:58.427 "memory_domains": [ 00:11:58.427 { 00:11:58.427 "dma_device_id": "system", 00:11:58.427 "dma_device_type": 1 00:11:58.427 }, 00:11:58.427 { 00:11:58.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.427 "dma_device_type": 2 00:11:58.427 } 00:11:58.427 ], 00:11:58.427 "driver_specific": {} 00:11:58.427 } 00:11:58.427 ] 00:11:58.427 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.427 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:58.427 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:58.427 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:58.427 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:58.427 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.427 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.427 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:58.427 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:58.427 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.427 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.427 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.427 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.427 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.427 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.427 21:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.427 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.427 21:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.427 21:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.427 21:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.427 "name": "Existed_Raid", 00:11:58.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.428 "strip_size_kb": 64, 00:11:58.428 "state": "configuring", 00:11:58.428 "raid_level": "raid0", 00:11:58.428 "superblock": false, 00:11:58.428 "num_base_bdevs": 4, 00:11:58.428 "num_base_bdevs_discovered": 2, 00:11:58.428 "num_base_bdevs_operational": 4, 00:11:58.428 "base_bdevs_list": [ 00:11:58.428 { 00:11:58.428 "name": "BaseBdev1", 00:11:58.428 "uuid": "0122fb3c-6c73-4721-b084-57c4c7b8ecb7", 00:11:58.428 "is_configured": true, 00:11:58.428 "data_offset": 0, 00:11:58.428 "data_size": 65536 00:11:58.428 }, 00:11:58.428 { 00:11:58.428 "name": "BaseBdev2", 00:11:58.428 "uuid": "498f519a-38c9-41a8-bc38-584f7c8a96a7", 00:11:58.428 
"is_configured": true, 00:11:58.428 "data_offset": 0, 00:11:58.428 "data_size": 65536 00:11:58.428 }, 00:11:58.428 { 00:11:58.428 "name": "BaseBdev3", 00:11:58.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.428 "is_configured": false, 00:11:58.428 "data_offset": 0, 00:11:58.428 "data_size": 0 00:11:58.428 }, 00:11:58.428 { 00:11:58.428 "name": "BaseBdev4", 00:11:58.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.428 "is_configured": false, 00:11:58.428 "data_offset": 0, 00:11:58.428 "data_size": 0 00:11:58.428 } 00:11:58.428 ] 00:11:58.428 }' 00:11:58.428 21:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.428 21:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.686 21:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:58.686 21:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.686 21:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.945 [2024-12-10 21:38:59.492519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:58.945 BaseBdev3 00:11:58.945 21:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.945 21:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:58.945 21:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:58.945 21:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:58.945 21:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:58.945 21:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:58.945 21:38:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:58.945 21:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:58.945 21:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.945 21:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.945 21:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.945 21:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:58.945 21:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.945 21:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.945 [ 00:11:58.945 { 00:11:58.945 "name": "BaseBdev3", 00:11:58.945 "aliases": [ 00:11:58.945 "53fc0f14-b8c7-4544-8460-129cd8164249" 00:11:58.945 ], 00:11:58.945 "product_name": "Malloc disk", 00:11:58.945 "block_size": 512, 00:11:58.945 "num_blocks": 65536, 00:11:58.945 "uuid": "53fc0f14-b8c7-4544-8460-129cd8164249", 00:11:58.945 "assigned_rate_limits": { 00:11:58.945 "rw_ios_per_sec": 0, 00:11:58.945 "rw_mbytes_per_sec": 0, 00:11:58.945 "r_mbytes_per_sec": 0, 00:11:58.945 "w_mbytes_per_sec": 0 00:11:58.945 }, 00:11:58.945 "claimed": true, 00:11:58.945 "claim_type": "exclusive_write", 00:11:58.945 "zoned": false, 00:11:58.945 "supported_io_types": { 00:11:58.945 "read": true, 00:11:58.945 "write": true, 00:11:58.945 "unmap": true, 00:11:58.945 "flush": true, 00:11:58.945 "reset": true, 00:11:58.945 "nvme_admin": false, 00:11:58.945 "nvme_io": false, 00:11:58.945 "nvme_io_md": false, 00:11:58.945 "write_zeroes": true, 00:11:58.945 "zcopy": true, 00:11:58.945 "get_zone_info": false, 00:11:58.945 "zone_management": false, 00:11:58.945 "zone_append": false, 00:11:58.945 "compare": false, 00:11:58.945 "compare_and_write": false, 
00:11:58.945 "abort": true, 00:11:58.945 "seek_hole": false, 00:11:58.945 "seek_data": false, 00:11:58.945 "copy": true, 00:11:58.945 "nvme_iov_md": false 00:11:58.945 }, 00:11:58.945 "memory_domains": [ 00:11:58.945 { 00:11:58.945 "dma_device_id": "system", 00:11:58.945 "dma_device_type": 1 00:11:58.945 }, 00:11:58.945 { 00:11:58.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.945 "dma_device_type": 2 00:11:58.945 } 00:11:58.945 ], 00:11:58.945 "driver_specific": {} 00:11:58.945 } 00:11:58.945 ] 00:11:58.945 21:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.945 21:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:58.945 21:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:58.945 21:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:58.945 21:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:58.945 21:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.945 21:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.945 21:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:58.945 21:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.945 21:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.945 21:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.945 21:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.945 21:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:58.945 21:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.945 21:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.945 21:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.945 21:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.945 21:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.945 21:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.945 21:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.945 "name": "Existed_Raid", 00:11:58.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.945 "strip_size_kb": 64, 00:11:58.945 "state": "configuring", 00:11:58.945 "raid_level": "raid0", 00:11:58.945 "superblock": false, 00:11:58.945 "num_base_bdevs": 4, 00:11:58.945 "num_base_bdevs_discovered": 3, 00:11:58.945 "num_base_bdevs_operational": 4, 00:11:58.945 "base_bdevs_list": [ 00:11:58.945 { 00:11:58.945 "name": "BaseBdev1", 00:11:58.945 "uuid": "0122fb3c-6c73-4721-b084-57c4c7b8ecb7", 00:11:58.945 "is_configured": true, 00:11:58.945 "data_offset": 0, 00:11:58.945 "data_size": 65536 00:11:58.945 }, 00:11:58.945 { 00:11:58.946 "name": "BaseBdev2", 00:11:58.946 "uuid": "498f519a-38c9-41a8-bc38-584f7c8a96a7", 00:11:58.946 "is_configured": true, 00:11:58.946 "data_offset": 0, 00:11:58.946 "data_size": 65536 00:11:58.946 }, 00:11:58.946 { 00:11:58.946 "name": "BaseBdev3", 00:11:58.946 "uuid": "53fc0f14-b8c7-4544-8460-129cd8164249", 00:11:58.946 "is_configured": true, 00:11:58.946 "data_offset": 0, 00:11:58.946 "data_size": 65536 00:11:58.946 }, 00:11:58.946 { 00:11:58.946 "name": "BaseBdev4", 00:11:58.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.946 "is_configured": false, 
00:11:58.946 "data_offset": 0, 00:11:58.946 "data_size": 0 00:11:58.946 } 00:11:58.946 ] 00:11:58.946 }' 00:11:58.946 21:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.946 21:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.513 21:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:59.513 21:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.513 21:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.513 [2024-12-10 21:39:00.042722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:59.513 [2024-12-10 21:39:00.042881] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:59.513 [2024-12-10 21:39:00.042897] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:59.513 [2024-12-10 21:39:00.043205] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:59.513 [2024-12-10 21:39:00.043393] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:59.513 [2024-12-10 21:39:00.043408] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:59.513 [2024-12-10 21:39:00.043802] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:59.513 BaseBdev4 00:11:59.513 21:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.513 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:59.513 21:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:59.513 21:39:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:59.513 21:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:59.513 21:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:59.513 21:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:59.513 21:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:59.513 21:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.513 21:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.513 21:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.513 21:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:59.513 21:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.513 21:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.513 [ 00:11:59.513 { 00:11:59.513 "name": "BaseBdev4", 00:11:59.513 "aliases": [ 00:11:59.513 "02999f85-cbb1-4ed4-aec0-82843af6327e" 00:11:59.513 ], 00:11:59.513 "product_name": "Malloc disk", 00:11:59.513 "block_size": 512, 00:11:59.513 "num_blocks": 65536, 00:11:59.513 "uuid": "02999f85-cbb1-4ed4-aec0-82843af6327e", 00:11:59.513 "assigned_rate_limits": { 00:11:59.513 "rw_ios_per_sec": 0, 00:11:59.513 "rw_mbytes_per_sec": 0, 00:11:59.513 "r_mbytes_per_sec": 0, 00:11:59.513 "w_mbytes_per_sec": 0 00:11:59.513 }, 00:11:59.513 "claimed": true, 00:11:59.513 "claim_type": "exclusive_write", 00:11:59.513 "zoned": false, 00:11:59.513 "supported_io_types": { 00:11:59.513 "read": true, 00:11:59.513 "write": true, 00:11:59.513 "unmap": true, 00:11:59.513 "flush": true, 00:11:59.513 "reset": true, 00:11:59.513 
"nvme_admin": false, 00:11:59.513 "nvme_io": false, 00:11:59.513 "nvme_io_md": false, 00:11:59.513 "write_zeroes": true, 00:11:59.513 "zcopy": true, 00:11:59.513 "get_zone_info": false, 00:11:59.513 "zone_management": false, 00:11:59.513 "zone_append": false, 00:11:59.513 "compare": false, 00:11:59.513 "compare_and_write": false, 00:11:59.513 "abort": true, 00:11:59.513 "seek_hole": false, 00:11:59.513 "seek_data": false, 00:11:59.513 "copy": true, 00:11:59.513 "nvme_iov_md": false 00:11:59.513 }, 00:11:59.513 "memory_domains": [ 00:11:59.513 { 00:11:59.513 "dma_device_id": "system", 00:11:59.513 "dma_device_type": 1 00:11:59.513 }, 00:11:59.513 { 00:11:59.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.513 "dma_device_type": 2 00:11:59.513 } 00:11:59.513 ], 00:11:59.513 "driver_specific": {} 00:11:59.513 } 00:11:59.513 ] 00:11:59.513 21:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.513 21:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:59.513 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:59.513 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:59.513 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:59.513 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.513 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.513 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:59.513 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:59.513 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:59.513 21:39:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.513 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.513 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.513 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.513 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.513 21:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.513 21:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.513 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.513 21:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.513 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.513 "name": "Existed_Raid", 00:11:59.513 "uuid": "153ccf99-999c-40a4-bae5-d20bc5dc3c4d", 00:11:59.513 "strip_size_kb": 64, 00:11:59.513 "state": "online", 00:11:59.513 "raid_level": "raid0", 00:11:59.513 "superblock": false, 00:11:59.513 "num_base_bdevs": 4, 00:11:59.513 "num_base_bdevs_discovered": 4, 00:11:59.513 "num_base_bdevs_operational": 4, 00:11:59.513 "base_bdevs_list": [ 00:11:59.513 { 00:11:59.513 "name": "BaseBdev1", 00:11:59.513 "uuid": "0122fb3c-6c73-4721-b084-57c4c7b8ecb7", 00:11:59.513 "is_configured": true, 00:11:59.513 "data_offset": 0, 00:11:59.513 "data_size": 65536 00:11:59.513 }, 00:11:59.513 { 00:11:59.513 "name": "BaseBdev2", 00:11:59.513 "uuid": "498f519a-38c9-41a8-bc38-584f7c8a96a7", 00:11:59.513 "is_configured": true, 00:11:59.513 "data_offset": 0, 00:11:59.513 "data_size": 65536 00:11:59.513 }, 00:11:59.513 { 00:11:59.513 "name": "BaseBdev3", 00:11:59.513 "uuid": 
"53fc0f14-b8c7-4544-8460-129cd8164249", 00:11:59.513 "is_configured": true, 00:11:59.513 "data_offset": 0, 00:11:59.513 "data_size": 65536 00:11:59.513 }, 00:11:59.513 { 00:11:59.513 "name": "BaseBdev4", 00:11:59.513 "uuid": "02999f85-cbb1-4ed4-aec0-82843af6327e", 00:11:59.513 "is_configured": true, 00:11:59.513 "data_offset": 0, 00:11:59.513 "data_size": 65536 00:11:59.513 } 00:11:59.513 ] 00:11:59.513 }' 00:11:59.513 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.513 21:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.772 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:59.772 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:59.772 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:59.772 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:59.772 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:59.772 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:59.772 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:59.772 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:59.772 21:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.772 21:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.772 [2024-12-10 21:39:00.542322] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:00.031 21:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.031 21:39:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:00.031 "name": "Existed_Raid", 00:12:00.031 "aliases": [ 00:12:00.031 "153ccf99-999c-40a4-bae5-d20bc5dc3c4d" 00:12:00.031 ], 00:12:00.031 "product_name": "Raid Volume", 00:12:00.031 "block_size": 512, 00:12:00.031 "num_blocks": 262144, 00:12:00.031 "uuid": "153ccf99-999c-40a4-bae5-d20bc5dc3c4d", 00:12:00.031 "assigned_rate_limits": { 00:12:00.031 "rw_ios_per_sec": 0, 00:12:00.031 "rw_mbytes_per_sec": 0, 00:12:00.031 "r_mbytes_per_sec": 0, 00:12:00.031 "w_mbytes_per_sec": 0 00:12:00.031 }, 00:12:00.031 "claimed": false, 00:12:00.031 "zoned": false, 00:12:00.031 "supported_io_types": { 00:12:00.031 "read": true, 00:12:00.031 "write": true, 00:12:00.031 "unmap": true, 00:12:00.031 "flush": true, 00:12:00.031 "reset": true, 00:12:00.031 "nvme_admin": false, 00:12:00.031 "nvme_io": false, 00:12:00.031 "nvme_io_md": false, 00:12:00.031 "write_zeroes": true, 00:12:00.031 "zcopy": false, 00:12:00.031 "get_zone_info": false, 00:12:00.031 "zone_management": false, 00:12:00.031 "zone_append": false, 00:12:00.031 "compare": false, 00:12:00.031 "compare_and_write": false, 00:12:00.031 "abort": false, 00:12:00.031 "seek_hole": false, 00:12:00.031 "seek_data": false, 00:12:00.031 "copy": false, 00:12:00.031 "nvme_iov_md": false 00:12:00.031 }, 00:12:00.031 "memory_domains": [ 00:12:00.031 { 00:12:00.031 "dma_device_id": "system", 00:12:00.031 "dma_device_type": 1 00:12:00.031 }, 00:12:00.031 { 00:12:00.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.031 "dma_device_type": 2 00:12:00.031 }, 00:12:00.031 { 00:12:00.031 "dma_device_id": "system", 00:12:00.031 "dma_device_type": 1 00:12:00.031 }, 00:12:00.031 { 00:12:00.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.031 "dma_device_type": 2 00:12:00.031 }, 00:12:00.031 { 00:12:00.031 "dma_device_id": "system", 00:12:00.031 "dma_device_type": 1 00:12:00.031 }, 00:12:00.031 { 00:12:00.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:12:00.031 "dma_device_type": 2 00:12:00.031 }, 00:12:00.031 { 00:12:00.031 "dma_device_id": "system", 00:12:00.031 "dma_device_type": 1 00:12:00.031 }, 00:12:00.031 { 00:12:00.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.031 "dma_device_type": 2 00:12:00.031 } 00:12:00.031 ], 00:12:00.031 "driver_specific": { 00:12:00.031 "raid": { 00:12:00.031 "uuid": "153ccf99-999c-40a4-bae5-d20bc5dc3c4d", 00:12:00.031 "strip_size_kb": 64, 00:12:00.031 "state": "online", 00:12:00.031 "raid_level": "raid0", 00:12:00.031 "superblock": false, 00:12:00.031 "num_base_bdevs": 4, 00:12:00.031 "num_base_bdevs_discovered": 4, 00:12:00.031 "num_base_bdevs_operational": 4, 00:12:00.031 "base_bdevs_list": [ 00:12:00.031 { 00:12:00.031 "name": "BaseBdev1", 00:12:00.031 "uuid": "0122fb3c-6c73-4721-b084-57c4c7b8ecb7", 00:12:00.031 "is_configured": true, 00:12:00.031 "data_offset": 0, 00:12:00.031 "data_size": 65536 00:12:00.031 }, 00:12:00.031 { 00:12:00.031 "name": "BaseBdev2", 00:12:00.031 "uuid": "498f519a-38c9-41a8-bc38-584f7c8a96a7", 00:12:00.031 "is_configured": true, 00:12:00.031 "data_offset": 0, 00:12:00.031 "data_size": 65536 00:12:00.031 }, 00:12:00.031 { 00:12:00.031 "name": "BaseBdev3", 00:12:00.031 "uuid": "53fc0f14-b8c7-4544-8460-129cd8164249", 00:12:00.031 "is_configured": true, 00:12:00.031 "data_offset": 0, 00:12:00.031 "data_size": 65536 00:12:00.031 }, 00:12:00.031 { 00:12:00.031 "name": "BaseBdev4", 00:12:00.031 "uuid": "02999f85-cbb1-4ed4-aec0-82843af6327e", 00:12:00.031 "is_configured": true, 00:12:00.031 "data_offset": 0, 00:12:00.031 "data_size": 65536 00:12:00.031 } 00:12:00.031 ] 00:12:00.031 } 00:12:00.031 } 00:12:00.031 }' 00:12:00.031 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:00.031 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:00.031 BaseBdev2 00:12:00.031 BaseBdev3 
00:12:00.031 BaseBdev4' 00:12:00.031 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.031 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:00.031 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.031 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.031 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:00.031 21:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.031 21:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.031 21:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.031 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.031 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.031 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.031 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:00.031 21:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.031 21:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.031 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.031 21:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.031 21:39:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.031 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.031 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.031 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:00.031 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.031 21:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.031 21:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.031 21:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.031 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.031 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.031 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.031 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:00.031 21:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.031 21:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.290 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.290 21:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.290 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.290 21:39:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.290 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:00.290 21:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.290 21:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.290 [2024-12-10 21:39:00.865479] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:00.290 [2024-12-10 21:39:00.865511] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:00.290 [2024-12-10 21:39:00.865578] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:00.290 21:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.290 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:00.290 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:00.290 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:00.290 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:00.290 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:00.290 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:12:00.290 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.290 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:00.290 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:00.290 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:12:00.290 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:00.290 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.290 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.290 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.290 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.290 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.290 21:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.290 21:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.290 21:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.290 21:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.290 21:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.290 "name": "Existed_Raid", 00:12:00.290 "uuid": "153ccf99-999c-40a4-bae5-d20bc5dc3c4d", 00:12:00.290 "strip_size_kb": 64, 00:12:00.290 "state": "offline", 00:12:00.290 "raid_level": "raid0", 00:12:00.290 "superblock": false, 00:12:00.290 "num_base_bdevs": 4, 00:12:00.290 "num_base_bdevs_discovered": 3, 00:12:00.290 "num_base_bdevs_operational": 3, 00:12:00.290 "base_bdevs_list": [ 00:12:00.290 { 00:12:00.290 "name": null, 00:12:00.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.290 "is_configured": false, 00:12:00.290 "data_offset": 0, 00:12:00.290 "data_size": 65536 00:12:00.290 }, 00:12:00.290 { 00:12:00.290 "name": "BaseBdev2", 00:12:00.290 "uuid": "498f519a-38c9-41a8-bc38-584f7c8a96a7", 00:12:00.290 "is_configured": 
true, 00:12:00.290 "data_offset": 0, 00:12:00.290 "data_size": 65536 00:12:00.290 }, 00:12:00.290 { 00:12:00.290 "name": "BaseBdev3", 00:12:00.290 "uuid": "53fc0f14-b8c7-4544-8460-129cd8164249", 00:12:00.290 "is_configured": true, 00:12:00.290 "data_offset": 0, 00:12:00.290 "data_size": 65536 00:12:00.290 }, 00:12:00.290 { 00:12:00.290 "name": "BaseBdev4", 00:12:00.290 "uuid": "02999f85-cbb1-4ed4-aec0-82843af6327e", 00:12:00.290 "is_configured": true, 00:12:00.290 "data_offset": 0, 00:12:00.290 "data_size": 65536 00:12:00.290 } 00:12:00.290 ] 00:12:00.290 }' 00:12:00.290 21:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.290 21:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.857 21:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:00.857 21:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:00.857 21:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:00.857 21:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.857 21:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.857 21:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.857 21:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.857 21:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:00.857 21:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:00.857 21:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:00.857 21:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:00.857 21:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.857 [2024-12-10 21:39:01.505354] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:00.857 21:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.857 21:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:00.857 21:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:00.857 21:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.857 21:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.857 21:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:00.857 21:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.857 21:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.116 21:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:01.116 21:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:01.116 21:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:01.116 21:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.116 21:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.116 [2024-12-10 21:39:01.670490] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:01.116 21:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.116 21:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:01.116 21:39:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:01.116 21:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.116 21:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.116 21:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:01.116 21:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.116 21:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.116 21:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:01.116 21:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:01.116 21:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:01.116 21:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.116 21:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.116 [2024-12-10 21:39:01.835475] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:01.116 [2024-12-10 21:39:01.835539] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:01.374 21:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.374 21:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:01.374 21:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:01.374 21:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.374 21:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:12:01.374 21:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.374 21:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.374 21:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.374 21:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:01.374 21:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:01.374 21:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:01.374 21:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:01.374 21:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:01.374 21:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:01.374 21:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.374 21:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.374 BaseBdev2 00:12:01.374 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.374 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:01.374 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:01.374 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:01.374 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:01.374 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:01.374 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:12:01.374 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:01.374 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.374 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.374 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.374 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:01.374 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.374 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.374 [ 00:12:01.374 { 00:12:01.374 "name": "BaseBdev2", 00:12:01.374 "aliases": [ 00:12:01.374 "5337811a-57da-4597-9bd8-cea920a302a5" 00:12:01.374 ], 00:12:01.374 "product_name": "Malloc disk", 00:12:01.374 "block_size": 512, 00:12:01.374 "num_blocks": 65536, 00:12:01.374 "uuid": "5337811a-57da-4597-9bd8-cea920a302a5", 00:12:01.374 "assigned_rate_limits": { 00:12:01.374 "rw_ios_per_sec": 0, 00:12:01.374 "rw_mbytes_per_sec": 0, 00:12:01.374 "r_mbytes_per_sec": 0, 00:12:01.374 "w_mbytes_per_sec": 0 00:12:01.374 }, 00:12:01.374 "claimed": false, 00:12:01.374 "zoned": false, 00:12:01.374 "supported_io_types": { 00:12:01.374 "read": true, 00:12:01.374 "write": true, 00:12:01.374 "unmap": true, 00:12:01.374 "flush": true, 00:12:01.374 "reset": true, 00:12:01.374 "nvme_admin": false, 00:12:01.374 "nvme_io": false, 00:12:01.374 "nvme_io_md": false, 00:12:01.374 "write_zeroes": true, 00:12:01.374 "zcopy": true, 00:12:01.375 "get_zone_info": false, 00:12:01.375 "zone_management": false, 00:12:01.375 "zone_append": false, 00:12:01.375 "compare": false, 00:12:01.375 "compare_and_write": false, 00:12:01.375 "abort": true, 00:12:01.375 "seek_hole": false, 00:12:01.375 
"seek_data": false, 00:12:01.375 "copy": true, 00:12:01.375 "nvme_iov_md": false 00:12:01.375 }, 00:12:01.375 "memory_domains": [ 00:12:01.375 { 00:12:01.375 "dma_device_id": "system", 00:12:01.375 "dma_device_type": 1 00:12:01.375 }, 00:12:01.375 { 00:12:01.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.375 "dma_device_type": 2 00:12:01.375 } 00:12:01.375 ], 00:12:01.375 "driver_specific": {} 00:12:01.375 } 00:12:01.375 ] 00:12:01.375 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.375 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:01.375 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:01.375 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:01.375 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:01.375 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.375 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.375 BaseBdev3 00:12:01.375 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.375 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:01.375 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:01.375 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:01.375 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:01.375 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:01.375 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:12:01.375 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:01.375 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.375 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.375 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.375 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:01.375 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.375 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.375 [ 00:12:01.375 { 00:12:01.375 "name": "BaseBdev3", 00:12:01.375 "aliases": [ 00:12:01.375 "c8dc570a-3425-434e-818d-1c7f9744cbea" 00:12:01.375 ], 00:12:01.375 "product_name": "Malloc disk", 00:12:01.375 "block_size": 512, 00:12:01.375 "num_blocks": 65536, 00:12:01.375 "uuid": "c8dc570a-3425-434e-818d-1c7f9744cbea", 00:12:01.375 "assigned_rate_limits": { 00:12:01.375 "rw_ios_per_sec": 0, 00:12:01.375 "rw_mbytes_per_sec": 0, 00:12:01.375 "r_mbytes_per_sec": 0, 00:12:01.375 "w_mbytes_per_sec": 0 00:12:01.375 }, 00:12:01.375 "claimed": false, 00:12:01.375 "zoned": false, 00:12:01.375 "supported_io_types": { 00:12:01.375 "read": true, 00:12:01.375 "write": true, 00:12:01.375 "unmap": true, 00:12:01.375 "flush": true, 00:12:01.375 "reset": true, 00:12:01.375 "nvme_admin": false, 00:12:01.375 "nvme_io": false, 00:12:01.375 "nvme_io_md": false, 00:12:01.375 "write_zeroes": true, 00:12:01.375 "zcopy": true, 00:12:01.375 "get_zone_info": false, 00:12:01.375 "zone_management": false, 00:12:01.375 "zone_append": false, 00:12:01.375 "compare": false, 00:12:01.375 "compare_and_write": false, 00:12:01.375 "abort": true, 00:12:01.375 "seek_hole": false, 00:12:01.375 "seek_data": false, 
00:12:01.375 "copy": true, 00:12:01.375 "nvme_iov_md": false 00:12:01.375 }, 00:12:01.375 "memory_domains": [ 00:12:01.375 { 00:12:01.375 "dma_device_id": "system", 00:12:01.375 "dma_device_type": 1 00:12:01.375 }, 00:12:01.375 { 00:12:01.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.375 "dma_device_type": 2 00:12:01.375 } 00:12:01.375 ], 00:12:01.375 "driver_specific": {} 00:12:01.375 } 00:12:01.375 ] 00:12:01.375 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.375 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:01.375 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:01.375 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:01.375 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:01.375 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.375 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.633 BaseBdev4 00:12:01.633 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.633 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:01.633 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:01.633 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:01.633 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:01.633 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:01.633 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:01.633 
21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:01.633 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.633 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.633 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.633 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:01.633 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.633 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.633 [ 00:12:01.633 { 00:12:01.633 "name": "BaseBdev4", 00:12:01.633 "aliases": [ 00:12:01.633 "2de78005-d643-4d4a-b2d4-deb6f75cb600" 00:12:01.633 ], 00:12:01.633 "product_name": "Malloc disk", 00:12:01.633 "block_size": 512, 00:12:01.633 "num_blocks": 65536, 00:12:01.633 "uuid": "2de78005-d643-4d4a-b2d4-deb6f75cb600", 00:12:01.633 "assigned_rate_limits": { 00:12:01.633 "rw_ios_per_sec": 0, 00:12:01.633 "rw_mbytes_per_sec": 0, 00:12:01.633 "r_mbytes_per_sec": 0, 00:12:01.633 "w_mbytes_per_sec": 0 00:12:01.633 }, 00:12:01.633 "claimed": false, 00:12:01.633 "zoned": false, 00:12:01.633 "supported_io_types": { 00:12:01.633 "read": true, 00:12:01.633 "write": true, 00:12:01.633 "unmap": true, 00:12:01.633 "flush": true, 00:12:01.633 "reset": true, 00:12:01.633 "nvme_admin": false, 00:12:01.633 "nvme_io": false, 00:12:01.633 "nvme_io_md": false, 00:12:01.633 "write_zeroes": true, 00:12:01.633 "zcopy": true, 00:12:01.633 "get_zone_info": false, 00:12:01.633 "zone_management": false, 00:12:01.633 "zone_append": false, 00:12:01.633 "compare": false, 00:12:01.633 "compare_and_write": false, 00:12:01.633 "abort": true, 00:12:01.633 "seek_hole": false, 00:12:01.633 "seek_data": false, 00:12:01.633 
"copy": true, 00:12:01.633 "nvme_iov_md": false 00:12:01.633 }, 00:12:01.633 "memory_domains": [ 00:12:01.633 { 00:12:01.633 "dma_device_id": "system", 00:12:01.633 "dma_device_type": 1 00:12:01.633 }, 00:12:01.633 { 00:12:01.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.633 "dma_device_type": 2 00:12:01.633 } 00:12:01.633 ], 00:12:01.633 "driver_specific": {} 00:12:01.633 } 00:12:01.633 ] 00:12:01.633 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.633 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:01.633 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:01.633 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:01.633 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:01.633 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.633 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.633 [2024-12-10 21:39:02.245868] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:01.633 [2024-12-10 21:39:02.245960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:01.633 [2024-12-10 21:39:02.246004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:01.633 [2024-12-10 21:39:02.248021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:01.633 [2024-12-10 21:39:02.248136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:01.633 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.633 21:39:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:01.633 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.633 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.633 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:01.633 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:01.633 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.633 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.633 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.633 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.633 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.633 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.633 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.633 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.633 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.633 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.633 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.633 "name": "Existed_Raid", 00:12:01.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.633 "strip_size_kb": 64, 00:12:01.633 "state": "configuring", 00:12:01.633 
"raid_level": "raid0", 00:12:01.633 "superblock": false, 00:12:01.633 "num_base_bdevs": 4, 00:12:01.633 "num_base_bdevs_discovered": 3, 00:12:01.633 "num_base_bdevs_operational": 4, 00:12:01.633 "base_bdevs_list": [ 00:12:01.633 { 00:12:01.633 "name": "BaseBdev1", 00:12:01.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.633 "is_configured": false, 00:12:01.633 "data_offset": 0, 00:12:01.633 "data_size": 0 00:12:01.633 }, 00:12:01.633 { 00:12:01.633 "name": "BaseBdev2", 00:12:01.633 "uuid": "5337811a-57da-4597-9bd8-cea920a302a5", 00:12:01.633 "is_configured": true, 00:12:01.633 "data_offset": 0, 00:12:01.633 "data_size": 65536 00:12:01.633 }, 00:12:01.633 { 00:12:01.633 "name": "BaseBdev3", 00:12:01.633 "uuid": "c8dc570a-3425-434e-818d-1c7f9744cbea", 00:12:01.633 "is_configured": true, 00:12:01.633 "data_offset": 0, 00:12:01.633 "data_size": 65536 00:12:01.633 }, 00:12:01.633 { 00:12:01.633 "name": "BaseBdev4", 00:12:01.633 "uuid": "2de78005-d643-4d4a-b2d4-deb6f75cb600", 00:12:01.633 "is_configured": true, 00:12:01.633 "data_offset": 0, 00:12:01.633 "data_size": 65536 00:12:01.633 } 00:12:01.633 ] 00:12:01.633 }' 00:12:01.633 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.633 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.891 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:01.891 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.891 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.149 [2024-12-10 21:39:02.673204] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:02.149 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.149 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:02.149 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.149 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.149 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:02.149 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:02.149 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.149 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.149 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.149 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.149 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.149 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.149 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.149 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.149 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.149 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.149 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.149 "name": "Existed_Raid", 00:12:02.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.149 "strip_size_kb": 64, 00:12:02.149 "state": "configuring", 00:12:02.149 "raid_level": "raid0", 00:12:02.149 "superblock": false, 00:12:02.149 
"num_base_bdevs": 4, 00:12:02.149 "num_base_bdevs_discovered": 2, 00:12:02.149 "num_base_bdevs_operational": 4, 00:12:02.149 "base_bdevs_list": [ 00:12:02.149 { 00:12:02.149 "name": "BaseBdev1", 00:12:02.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.149 "is_configured": false, 00:12:02.149 "data_offset": 0, 00:12:02.149 "data_size": 0 00:12:02.149 }, 00:12:02.149 { 00:12:02.149 "name": null, 00:12:02.149 "uuid": "5337811a-57da-4597-9bd8-cea920a302a5", 00:12:02.149 "is_configured": false, 00:12:02.149 "data_offset": 0, 00:12:02.149 "data_size": 65536 00:12:02.149 }, 00:12:02.149 { 00:12:02.149 "name": "BaseBdev3", 00:12:02.149 "uuid": "c8dc570a-3425-434e-818d-1c7f9744cbea", 00:12:02.150 "is_configured": true, 00:12:02.150 "data_offset": 0, 00:12:02.150 "data_size": 65536 00:12:02.150 }, 00:12:02.150 { 00:12:02.150 "name": "BaseBdev4", 00:12:02.150 "uuid": "2de78005-d643-4d4a-b2d4-deb6f75cb600", 00:12:02.150 "is_configured": true, 00:12:02.150 "data_offset": 0, 00:12:02.150 "data_size": 65536 00:12:02.150 } 00:12:02.150 ] 00:12:02.150 }' 00:12:02.150 21:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.150 21:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.407 21:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:02.407 21:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.407 21:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.407 21:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.408 21:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.666 21:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:02.666 21:39:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:02.666 21:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.666 21:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.666 [2024-12-10 21:39:03.248557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:02.666 BaseBdev1 00:12:02.666 21:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.666 21:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:02.666 21:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:02.666 21:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:02.666 21:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:02.666 21:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:02.666 21:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:02.666 21:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:02.666 21:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.666 21:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.666 21:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.666 21:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:02.666 21:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.666 21:39:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:02.666 [ 00:12:02.666 { 00:12:02.666 "name": "BaseBdev1", 00:12:02.666 "aliases": [ 00:12:02.666 "ca299454-2d96-4038-91ba-e7cce4eb18b0" 00:12:02.666 ], 00:12:02.666 "product_name": "Malloc disk", 00:12:02.666 "block_size": 512, 00:12:02.666 "num_blocks": 65536, 00:12:02.666 "uuid": "ca299454-2d96-4038-91ba-e7cce4eb18b0", 00:12:02.666 "assigned_rate_limits": { 00:12:02.666 "rw_ios_per_sec": 0, 00:12:02.666 "rw_mbytes_per_sec": 0, 00:12:02.666 "r_mbytes_per_sec": 0, 00:12:02.666 "w_mbytes_per_sec": 0 00:12:02.666 }, 00:12:02.666 "claimed": true, 00:12:02.666 "claim_type": "exclusive_write", 00:12:02.666 "zoned": false, 00:12:02.666 "supported_io_types": { 00:12:02.666 "read": true, 00:12:02.666 "write": true, 00:12:02.666 "unmap": true, 00:12:02.666 "flush": true, 00:12:02.666 "reset": true, 00:12:02.666 "nvme_admin": false, 00:12:02.666 "nvme_io": false, 00:12:02.666 "nvme_io_md": false, 00:12:02.666 "write_zeroes": true, 00:12:02.666 "zcopy": true, 00:12:02.666 "get_zone_info": false, 00:12:02.666 "zone_management": false, 00:12:02.666 "zone_append": false, 00:12:02.666 "compare": false, 00:12:02.666 "compare_and_write": false, 00:12:02.666 "abort": true, 00:12:02.666 "seek_hole": false, 00:12:02.666 "seek_data": false, 00:12:02.666 "copy": true, 00:12:02.666 "nvme_iov_md": false 00:12:02.666 }, 00:12:02.666 "memory_domains": [ 00:12:02.666 { 00:12:02.666 "dma_device_id": "system", 00:12:02.666 "dma_device_type": 1 00:12:02.666 }, 00:12:02.666 { 00:12:02.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.666 "dma_device_type": 2 00:12:02.666 } 00:12:02.666 ], 00:12:02.666 "driver_specific": {} 00:12:02.666 } 00:12:02.666 ] 00:12:02.666 21:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.666 21:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:02.666 21:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:02.666 21:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.666 21:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.666 21:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:02.666 21:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:02.666 21:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.666 21:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.666 21:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.666 21:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.666 21:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.666 21:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.666 21:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.666 21:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.666 21:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.666 21:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.666 21:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.666 "name": "Existed_Raid", 00:12:02.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.666 "strip_size_kb": 64, 00:12:02.666 "state": "configuring", 00:12:02.666 "raid_level": "raid0", 00:12:02.666 "superblock": false, 
00:12:02.666 "num_base_bdevs": 4, 00:12:02.666 "num_base_bdevs_discovered": 3, 00:12:02.666 "num_base_bdevs_operational": 4, 00:12:02.666 "base_bdevs_list": [ 00:12:02.666 { 00:12:02.666 "name": "BaseBdev1", 00:12:02.666 "uuid": "ca299454-2d96-4038-91ba-e7cce4eb18b0", 00:12:02.666 "is_configured": true, 00:12:02.666 "data_offset": 0, 00:12:02.666 "data_size": 65536 00:12:02.666 }, 00:12:02.666 { 00:12:02.666 "name": null, 00:12:02.666 "uuid": "5337811a-57da-4597-9bd8-cea920a302a5", 00:12:02.666 "is_configured": false, 00:12:02.666 "data_offset": 0, 00:12:02.666 "data_size": 65536 00:12:02.666 }, 00:12:02.666 { 00:12:02.666 "name": "BaseBdev3", 00:12:02.666 "uuid": "c8dc570a-3425-434e-818d-1c7f9744cbea", 00:12:02.666 "is_configured": true, 00:12:02.666 "data_offset": 0, 00:12:02.666 "data_size": 65536 00:12:02.666 }, 00:12:02.666 { 00:12:02.666 "name": "BaseBdev4", 00:12:02.666 "uuid": "2de78005-d643-4d4a-b2d4-deb6f75cb600", 00:12:02.666 "is_configured": true, 00:12:02.666 "data_offset": 0, 00:12:02.666 "data_size": 65536 00:12:02.666 } 00:12:02.666 ] 00:12:02.666 }' 00:12:02.666 21:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.666 21:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.232 21:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.232 21:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:03.232 21:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.232 21:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.232 21:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.232 21:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:03.232 21:39:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:03.232 21:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.232 21:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.232 [2024-12-10 21:39:03.807792] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:03.232 21:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.232 21:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:03.232 21:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.232 21:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.232 21:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:03.232 21:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:03.232 21:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.232 21:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.232 21:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.232 21:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.232 21:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.232 21:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.232 21:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.232 21:39:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.232 21:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.232 21:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.232 21:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.232 "name": "Existed_Raid", 00:12:03.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.232 "strip_size_kb": 64, 00:12:03.232 "state": "configuring", 00:12:03.232 "raid_level": "raid0", 00:12:03.232 "superblock": false, 00:12:03.232 "num_base_bdevs": 4, 00:12:03.232 "num_base_bdevs_discovered": 2, 00:12:03.232 "num_base_bdevs_operational": 4, 00:12:03.232 "base_bdevs_list": [ 00:12:03.232 { 00:12:03.232 "name": "BaseBdev1", 00:12:03.232 "uuid": "ca299454-2d96-4038-91ba-e7cce4eb18b0", 00:12:03.232 "is_configured": true, 00:12:03.232 "data_offset": 0, 00:12:03.232 "data_size": 65536 00:12:03.232 }, 00:12:03.232 { 00:12:03.232 "name": null, 00:12:03.232 "uuid": "5337811a-57da-4597-9bd8-cea920a302a5", 00:12:03.232 "is_configured": false, 00:12:03.232 "data_offset": 0, 00:12:03.232 "data_size": 65536 00:12:03.232 }, 00:12:03.232 { 00:12:03.232 "name": null, 00:12:03.232 "uuid": "c8dc570a-3425-434e-818d-1c7f9744cbea", 00:12:03.232 "is_configured": false, 00:12:03.232 "data_offset": 0, 00:12:03.232 "data_size": 65536 00:12:03.232 }, 00:12:03.232 { 00:12:03.232 "name": "BaseBdev4", 00:12:03.232 "uuid": "2de78005-d643-4d4a-b2d4-deb6f75cb600", 00:12:03.232 "is_configured": true, 00:12:03.232 "data_offset": 0, 00:12:03.232 "data_size": 65536 00:12:03.232 } 00:12:03.232 ] 00:12:03.232 }' 00:12:03.232 21:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.232 21:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.799 21:39:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.799 21:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.799 21:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.799 21:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:03.799 21:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.799 21:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:03.799 21:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:03.799 21:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.799 21:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.799 [2024-12-10 21:39:04.326913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:03.799 21:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.799 21:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:03.799 21:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.799 21:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.799 21:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:03.799 21:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:03.799 21:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.799 21:39:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.799 21:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.799 21:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.799 21:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.799 21:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.799 21:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.799 21:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.799 21:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.799 21:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.799 21:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.799 "name": "Existed_Raid", 00:12:03.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.799 "strip_size_kb": 64, 00:12:03.799 "state": "configuring", 00:12:03.799 "raid_level": "raid0", 00:12:03.799 "superblock": false, 00:12:03.799 "num_base_bdevs": 4, 00:12:03.799 "num_base_bdevs_discovered": 3, 00:12:03.799 "num_base_bdevs_operational": 4, 00:12:03.799 "base_bdevs_list": [ 00:12:03.799 { 00:12:03.799 "name": "BaseBdev1", 00:12:03.799 "uuid": "ca299454-2d96-4038-91ba-e7cce4eb18b0", 00:12:03.799 "is_configured": true, 00:12:03.799 "data_offset": 0, 00:12:03.799 "data_size": 65536 00:12:03.799 }, 00:12:03.799 { 00:12:03.799 "name": null, 00:12:03.799 "uuid": "5337811a-57da-4597-9bd8-cea920a302a5", 00:12:03.799 "is_configured": false, 00:12:03.799 "data_offset": 0, 00:12:03.799 "data_size": 65536 00:12:03.799 }, 00:12:03.799 { 00:12:03.799 "name": "BaseBdev3", 00:12:03.799 "uuid": "c8dc570a-3425-434e-818d-1c7f9744cbea", 
00:12:03.799 "is_configured": true, 00:12:03.799 "data_offset": 0, 00:12:03.799 "data_size": 65536 00:12:03.799 }, 00:12:03.799 { 00:12:03.799 "name": "BaseBdev4", 00:12:03.800 "uuid": "2de78005-d643-4d4a-b2d4-deb6f75cb600", 00:12:03.800 "is_configured": true, 00:12:03.800 "data_offset": 0, 00:12:03.800 "data_size": 65536 00:12:03.800 } 00:12:03.800 ] 00:12:03.800 }' 00:12:03.800 21:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.800 21:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.058 21:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:04.058 21:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.058 21:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.058 21:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.058 21:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.058 21:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:04.058 21:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:04.058 21:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.058 21:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.058 [2024-12-10 21:39:04.794183] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:04.316 21:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.316 21:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:04.316 21:39:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.316 21:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:04.316 21:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:04.316 21:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:04.316 21:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:04.316 21:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.316 21:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.316 21:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.316 21:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.316 21:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.316 21:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.316 21:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.316 21:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.316 21:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.316 21:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.316 "name": "Existed_Raid", 00:12:04.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.316 "strip_size_kb": 64, 00:12:04.316 "state": "configuring", 00:12:04.316 "raid_level": "raid0", 00:12:04.316 "superblock": false, 00:12:04.316 "num_base_bdevs": 4, 00:12:04.316 "num_base_bdevs_discovered": 2, 00:12:04.316 
"num_base_bdevs_operational": 4, 00:12:04.316 "base_bdevs_list": [ 00:12:04.316 { 00:12:04.316 "name": null, 00:12:04.316 "uuid": "ca299454-2d96-4038-91ba-e7cce4eb18b0", 00:12:04.316 "is_configured": false, 00:12:04.316 "data_offset": 0, 00:12:04.316 "data_size": 65536 00:12:04.316 }, 00:12:04.316 { 00:12:04.316 "name": null, 00:12:04.316 "uuid": "5337811a-57da-4597-9bd8-cea920a302a5", 00:12:04.316 "is_configured": false, 00:12:04.316 "data_offset": 0, 00:12:04.316 "data_size": 65536 00:12:04.316 }, 00:12:04.316 { 00:12:04.316 "name": "BaseBdev3", 00:12:04.316 "uuid": "c8dc570a-3425-434e-818d-1c7f9744cbea", 00:12:04.316 "is_configured": true, 00:12:04.316 "data_offset": 0, 00:12:04.316 "data_size": 65536 00:12:04.316 }, 00:12:04.316 { 00:12:04.316 "name": "BaseBdev4", 00:12:04.316 "uuid": "2de78005-d643-4d4a-b2d4-deb6f75cb600", 00:12:04.316 "is_configured": true, 00:12:04.316 "data_offset": 0, 00:12:04.316 "data_size": 65536 00:12:04.316 } 00:12:04.316 ] 00:12:04.316 }' 00:12:04.316 21:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.316 21:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.882 21:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:04.882 21:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.882 21:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.882 21:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.882 21:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.882 21:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:04.882 21:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:12:04.882 21:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.882 21:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.882 [2024-12-10 21:39:05.432760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:04.882 21:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.882 21:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:04.882 21:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.882 21:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:04.882 21:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:04.882 21:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:04.882 21:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:04.882 21:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.882 21:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.882 21:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.882 21:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.882 21:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.882 21:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.882 21:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.882 
21:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.882 21:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.882 21:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.882 "name": "Existed_Raid", 00:12:04.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.882 "strip_size_kb": 64, 00:12:04.882 "state": "configuring", 00:12:04.882 "raid_level": "raid0", 00:12:04.882 "superblock": false, 00:12:04.882 "num_base_bdevs": 4, 00:12:04.882 "num_base_bdevs_discovered": 3, 00:12:04.882 "num_base_bdevs_operational": 4, 00:12:04.882 "base_bdevs_list": [ 00:12:04.882 { 00:12:04.882 "name": null, 00:12:04.882 "uuid": "ca299454-2d96-4038-91ba-e7cce4eb18b0", 00:12:04.882 "is_configured": false, 00:12:04.882 "data_offset": 0, 00:12:04.882 "data_size": 65536 00:12:04.882 }, 00:12:04.882 { 00:12:04.882 "name": "BaseBdev2", 00:12:04.882 "uuid": "5337811a-57da-4597-9bd8-cea920a302a5", 00:12:04.882 "is_configured": true, 00:12:04.882 "data_offset": 0, 00:12:04.882 "data_size": 65536 00:12:04.882 }, 00:12:04.882 { 00:12:04.882 "name": "BaseBdev3", 00:12:04.882 "uuid": "c8dc570a-3425-434e-818d-1c7f9744cbea", 00:12:04.882 "is_configured": true, 00:12:04.882 "data_offset": 0, 00:12:04.882 "data_size": 65536 00:12:04.882 }, 00:12:04.882 { 00:12:04.882 "name": "BaseBdev4", 00:12:04.882 "uuid": "2de78005-d643-4d4a-b2d4-deb6f75cb600", 00:12:04.882 "is_configured": true, 00:12:04.882 "data_offset": 0, 00:12:04.882 "data_size": 65536 00:12:04.882 } 00:12:04.882 ] 00:12:04.882 }' 00:12:04.882 21:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.882 21:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.140 21:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.140 21:39:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:05.140 21:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.140 21:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.140 21:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.140 21:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:05.140 21:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:05.140 21:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.140 21:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.140 21:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.398 21:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.398 21:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ca299454-2d96-4038-91ba-e7cce4eb18b0 00:12:05.398 21:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.398 21:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.398 [2024-12-10 21:39:05.995455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:05.398 [2024-12-10 21:39:05.995617] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:05.398 [2024-12-10 21:39:05.995665] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:05.398 [2024-12-10 21:39:05.996002] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:12:05.398 [2024-12-10 21:39:05.996215] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:05.398 [2024-12-10 21:39:05.996264] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:05.398 [2024-12-10 21:39:05.996614] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.398 NewBaseBdev 00:12:05.398 21:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.398 21:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:05.398 21:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:05.398 21:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:05.398 21:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:05.398 21:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:05.398 21:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:05.398 21:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:05.398 21:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.398 21:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.398 21:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.398 21:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:05.398 21:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.398 21:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:12:05.398 [ 00:12:05.398 { 00:12:05.398 "name": "NewBaseBdev", 00:12:05.398 "aliases": [ 00:12:05.398 "ca299454-2d96-4038-91ba-e7cce4eb18b0" 00:12:05.398 ], 00:12:05.398 "product_name": "Malloc disk", 00:12:05.398 "block_size": 512, 00:12:05.398 "num_blocks": 65536, 00:12:05.398 "uuid": "ca299454-2d96-4038-91ba-e7cce4eb18b0", 00:12:05.398 "assigned_rate_limits": { 00:12:05.398 "rw_ios_per_sec": 0, 00:12:05.398 "rw_mbytes_per_sec": 0, 00:12:05.398 "r_mbytes_per_sec": 0, 00:12:05.398 "w_mbytes_per_sec": 0 00:12:05.398 }, 00:12:05.398 "claimed": true, 00:12:05.398 "claim_type": "exclusive_write", 00:12:05.398 "zoned": false, 00:12:05.398 "supported_io_types": { 00:12:05.398 "read": true, 00:12:05.398 "write": true, 00:12:05.398 "unmap": true, 00:12:05.398 "flush": true, 00:12:05.398 "reset": true, 00:12:05.398 "nvme_admin": false, 00:12:05.398 "nvme_io": false, 00:12:05.398 "nvme_io_md": false, 00:12:05.398 "write_zeroes": true, 00:12:05.398 "zcopy": true, 00:12:05.398 "get_zone_info": false, 00:12:05.398 "zone_management": false, 00:12:05.398 "zone_append": false, 00:12:05.398 "compare": false, 00:12:05.398 "compare_and_write": false, 00:12:05.398 "abort": true, 00:12:05.398 "seek_hole": false, 00:12:05.398 "seek_data": false, 00:12:05.398 "copy": true, 00:12:05.398 "nvme_iov_md": false 00:12:05.398 }, 00:12:05.398 "memory_domains": [ 00:12:05.398 { 00:12:05.398 "dma_device_id": "system", 00:12:05.398 "dma_device_type": 1 00:12:05.398 }, 00:12:05.398 { 00:12:05.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.398 "dma_device_type": 2 00:12:05.398 } 00:12:05.398 ], 00:12:05.398 "driver_specific": {} 00:12:05.398 } 00:12:05.398 ] 00:12:05.398 21:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.398 21:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:05.398 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:12:05.399 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.399 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.399 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:05.399 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:05.399 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.399 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.399 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.399 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.399 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.399 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.399 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.399 21:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.399 21:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.399 21:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.399 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.399 "name": "Existed_Raid", 00:12:05.399 "uuid": "91e04b22-f540-4861-b5e8-42a33f0ace82", 00:12:05.399 "strip_size_kb": 64, 00:12:05.399 "state": "online", 00:12:05.399 "raid_level": "raid0", 00:12:05.399 "superblock": false, 00:12:05.399 "num_base_bdevs": 4, 00:12:05.399 
"num_base_bdevs_discovered": 4, 00:12:05.399 "num_base_bdevs_operational": 4, 00:12:05.399 "base_bdevs_list": [ 00:12:05.399 { 00:12:05.399 "name": "NewBaseBdev", 00:12:05.399 "uuid": "ca299454-2d96-4038-91ba-e7cce4eb18b0", 00:12:05.399 "is_configured": true, 00:12:05.399 "data_offset": 0, 00:12:05.399 "data_size": 65536 00:12:05.399 }, 00:12:05.399 { 00:12:05.399 "name": "BaseBdev2", 00:12:05.399 "uuid": "5337811a-57da-4597-9bd8-cea920a302a5", 00:12:05.399 "is_configured": true, 00:12:05.399 "data_offset": 0, 00:12:05.399 "data_size": 65536 00:12:05.399 }, 00:12:05.399 { 00:12:05.399 "name": "BaseBdev3", 00:12:05.399 "uuid": "c8dc570a-3425-434e-818d-1c7f9744cbea", 00:12:05.399 "is_configured": true, 00:12:05.399 "data_offset": 0, 00:12:05.399 "data_size": 65536 00:12:05.399 }, 00:12:05.399 { 00:12:05.399 "name": "BaseBdev4", 00:12:05.399 "uuid": "2de78005-d643-4d4a-b2d4-deb6f75cb600", 00:12:05.399 "is_configured": true, 00:12:05.399 "data_offset": 0, 00:12:05.399 "data_size": 65536 00:12:05.399 } 00:12:05.399 ] 00:12:05.399 }' 00:12:05.399 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.399 21:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.965 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:05.965 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:05.965 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:05.965 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:05.965 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:05.965 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:05.965 21:39:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:05.965 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:05.965 21:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.965 21:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.965 [2024-12-10 21:39:06.511000] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:05.965 21:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.965 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:05.965 "name": "Existed_Raid", 00:12:05.965 "aliases": [ 00:12:05.965 "91e04b22-f540-4861-b5e8-42a33f0ace82" 00:12:05.965 ], 00:12:05.965 "product_name": "Raid Volume", 00:12:05.965 "block_size": 512, 00:12:05.965 "num_blocks": 262144, 00:12:05.965 "uuid": "91e04b22-f540-4861-b5e8-42a33f0ace82", 00:12:05.965 "assigned_rate_limits": { 00:12:05.965 "rw_ios_per_sec": 0, 00:12:05.965 "rw_mbytes_per_sec": 0, 00:12:05.965 "r_mbytes_per_sec": 0, 00:12:05.965 "w_mbytes_per_sec": 0 00:12:05.965 }, 00:12:05.965 "claimed": false, 00:12:05.965 "zoned": false, 00:12:05.965 "supported_io_types": { 00:12:05.965 "read": true, 00:12:05.965 "write": true, 00:12:05.965 "unmap": true, 00:12:05.965 "flush": true, 00:12:05.965 "reset": true, 00:12:05.965 "nvme_admin": false, 00:12:05.965 "nvme_io": false, 00:12:05.965 "nvme_io_md": false, 00:12:05.965 "write_zeroes": true, 00:12:05.965 "zcopy": false, 00:12:05.965 "get_zone_info": false, 00:12:05.965 "zone_management": false, 00:12:05.965 "zone_append": false, 00:12:05.965 "compare": false, 00:12:05.965 "compare_and_write": false, 00:12:05.965 "abort": false, 00:12:05.965 "seek_hole": false, 00:12:05.965 "seek_data": false, 00:12:05.965 "copy": false, 00:12:05.965 "nvme_iov_md": false 00:12:05.965 }, 00:12:05.965 "memory_domains": [ 
00:12:05.965 { 00:12:05.965 "dma_device_id": "system", 00:12:05.965 "dma_device_type": 1 00:12:05.965 }, 00:12:05.965 { 00:12:05.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.965 "dma_device_type": 2 00:12:05.965 }, 00:12:05.965 { 00:12:05.965 "dma_device_id": "system", 00:12:05.965 "dma_device_type": 1 00:12:05.965 }, 00:12:05.965 { 00:12:05.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.965 "dma_device_type": 2 00:12:05.965 }, 00:12:05.965 { 00:12:05.965 "dma_device_id": "system", 00:12:05.965 "dma_device_type": 1 00:12:05.965 }, 00:12:05.965 { 00:12:05.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.965 "dma_device_type": 2 00:12:05.965 }, 00:12:05.965 { 00:12:05.965 "dma_device_id": "system", 00:12:05.965 "dma_device_type": 1 00:12:05.965 }, 00:12:05.965 { 00:12:05.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.965 "dma_device_type": 2 00:12:05.965 } 00:12:05.965 ], 00:12:05.965 "driver_specific": { 00:12:05.965 "raid": { 00:12:05.965 "uuid": "91e04b22-f540-4861-b5e8-42a33f0ace82", 00:12:05.965 "strip_size_kb": 64, 00:12:05.965 "state": "online", 00:12:05.965 "raid_level": "raid0", 00:12:05.965 "superblock": false, 00:12:05.965 "num_base_bdevs": 4, 00:12:05.965 "num_base_bdevs_discovered": 4, 00:12:05.965 "num_base_bdevs_operational": 4, 00:12:05.965 "base_bdevs_list": [ 00:12:05.965 { 00:12:05.965 "name": "NewBaseBdev", 00:12:05.965 "uuid": "ca299454-2d96-4038-91ba-e7cce4eb18b0", 00:12:05.965 "is_configured": true, 00:12:05.965 "data_offset": 0, 00:12:05.965 "data_size": 65536 00:12:05.965 }, 00:12:05.965 { 00:12:05.965 "name": "BaseBdev2", 00:12:05.965 "uuid": "5337811a-57da-4597-9bd8-cea920a302a5", 00:12:05.965 "is_configured": true, 00:12:05.965 "data_offset": 0, 00:12:05.965 "data_size": 65536 00:12:05.965 }, 00:12:05.965 { 00:12:05.965 "name": "BaseBdev3", 00:12:05.965 "uuid": "c8dc570a-3425-434e-818d-1c7f9744cbea", 00:12:05.965 "is_configured": true, 00:12:05.965 "data_offset": 0, 00:12:05.965 "data_size": 65536 
00:12:05.965 }, 00:12:05.965 { 00:12:05.965 "name": "BaseBdev4", 00:12:05.965 "uuid": "2de78005-d643-4d4a-b2d4-deb6f75cb600", 00:12:05.965 "is_configured": true, 00:12:05.965 "data_offset": 0, 00:12:05.965 "data_size": 65536 00:12:05.965 } 00:12:05.965 ] 00:12:05.965 } 00:12:05.965 } 00:12:05.965 }' 00:12:05.965 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:05.965 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:05.965 BaseBdev2 00:12:05.965 BaseBdev3 00:12:05.965 BaseBdev4' 00:12:05.965 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.965 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:05.965 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.965 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.965 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:05.965 21:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.965 21:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.965 21:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.965 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.965 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.965 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.965 
21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:05.965 21:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.965 21:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.965 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.965 21:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.224 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.224 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.224 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.224 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:06.224 21:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.224 21:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.224 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.224 21:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.224 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.224 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.224 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.224 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:12:06.224 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:06.224 21:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.224 21:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.224 21:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.224 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.224 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.224 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:06.224 21:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.224 21:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.224 [2024-12-10 21:39:06.881991] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:06.224 [2024-12-10 21:39:06.882073] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:06.224 [2024-12-10 21:39:06.882184] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:06.224 [2024-12-10 21:39:06.882286] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:06.224 [2024-12-10 21:39:06.882336] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:06.224 21:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.224 21:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69497 00:12:06.224 21:39:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 69497 ']' 00:12:06.224 21:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69497 00:12:06.224 21:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:06.224 21:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:06.224 21:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69497 00:12:06.224 killing process with pid 69497 00:12:06.224 21:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:06.224 21:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:06.224 21:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69497' 00:12:06.224 21:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69497 00:12:06.224 [2024-12-10 21:39:06.919991] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:06.224 21:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69497 00:12:06.790 [2024-12-10 21:39:07.378042] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:08.189 00:12:08.189 real 0m12.267s 00:12:08.189 user 0m19.431s 00:12:08.189 sys 0m2.090s 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.189 ************************************ 00:12:08.189 END TEST raid_state_function_test 00:12:08.189 ************************************ 00:12:08.189 21:39:08 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:12:08.189 21:39:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:08.189 21:39:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.189 21:39:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:08.189 ************************************ 00:12:08.189 START TEST raid_state_function_test_sb 00:12:08.189 ************************************ 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:08.189 
21:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70179 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70179' 00:12:08.189 Process raid pid: 70179 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70179 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70179 ']' 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:08.189 21:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.189 [2024-12-10 21:39:08.829677] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:12:08.189 [2024-12-10 21:39:08.829907] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.463 [2024-12-10 21:39:09.011487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.463 [2024-12-10 21:39:09.132756] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.723 [2024-12-10 21:39:09.352180] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:08.723 [2024-12-10 21:39:09.352322] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:08.982 21:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:08.982 21:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:08.982 21:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:08.982 21:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.982 21:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.982 [2024-12-10 21:39:09.729212] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:08.982 [2024-12-10 21:39:09.729354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:08.983 [2024-12-10 21:39:09.729392] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:08.983 [2024-12-10 21:39:09.729430] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:08.983 [2024-12-10 21:39:09.729461] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:12:08.983 [2024-12-10 21:39:09.729487] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:08.983 [2024-12-10 21:39:09.729509] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:08.983 [2024-12-10 21:39:09.729533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:08.983 21:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.983 21:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:08.983 21:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.983 21:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.983 21:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:08.983 21:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.983 21:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.983 21:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.983 21:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.983 21:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.983 21:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.983 21:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.983 21:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.983 21:39:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.983 21:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.983 21:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.243 21:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.243 "name": "Existed_Raid", 00:12:09.243 "uuid": "b81d672f-2b9e-46eb-a460-5839c24220da", 00:12:09.243 "strip_size_kb": 64, 00:12:09.243 "state": "configuring", 00:12:09.243 "raid_level": "raid0", 00:12:09.243 "superblock": true, 00:12:09.243 "num_base_bdevs": 4, 00:12:09.243 "num_base_bdevs_discovered": 0, 00:12:09.243 "num_base_bdevs_operational": 4, 00:12:09.243 "base_bdevs_list": [ 00:12:09.243 { 00:12:09.243 "name": "BaseBdev1", 00:12:09.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.243 "is_configured": false, 00:12:09.243 "data_offset": 0, 00:12:09.243 "data_size": 0 00:12:09.243 }, 00:12:09.243 { 00:12:09.243 "name": "BaseBdev2", 00:12:09.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.243 "is_configured": false, 00:12:09.243 "data_offset": 0, 00:12:09.243 "data_size": 0 00:12:09.243 }, 00:12:09.243 { 00:12:09.243 "name": "BaseBdev3", 00:12:09.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.243 "is_configured": false, 00:12:09.243 "data_offset": 0, 00:12:09.243 "data_size": 0 00:12:09.243 }, 00:12:09.243 { 00:12:09.243 "name": "BaseBdev4", 00:12:09.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.243 "is_configured": false, 00:12:09.243 "data_offset": 0, 00:12:09.243 "data_size": 0 00:12:09.243 } 00:12:09.243 ] 00:12:09.243 }' 00:12:09.243 21:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.243 21:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.502 21:39:10 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:09.502 21:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.502 21:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.502 [2024-12-10 21:39:10.228310] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:09.502 [2024-12-10 21:39:10.228432] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:09.502 21:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.502 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:09.502 21:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.502 21:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.502 [2024-12-10 21:39:10.240287] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:09.502 [2024-12-10 21:39:10.240378] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:09.502 [2024-12-10 21:39:10.240409] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:09.502 [2024-12-10 21:39:10.240446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:09.502 [2024-12-10 21:39:10.240467] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:09.502 [2024-12-10 21:39:10.240502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:09.502 [2024-12-10 21:39:10.240512] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:12:09.502 [2024-12-10 21:39:10.240522] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:09.502 21:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.502 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:09.502 21:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.502 21:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.761 [2024-12-10 21:39:10.292134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:09.761 BaseBdev1 00:12:09.761 21:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.761 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:09.761 21:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:09.761 21:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:09.761 21:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:09.761 21:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:09.761 21:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:09.761 21:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:09.761 21:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.761 21:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.761 21:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:09.761 21:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:09.761 21:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.761 21:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.761 [ 00:12:09.761 { 00:12:09.761 "name": "BaseBdev1", 00:12:09.761 "aliases": [ 00:12:09.761 "2808302b-25eb-4154-803b-9e4374d3c463" 00:12:09.761 ], 00:12:09.761 "product_name": "Malloc disk", 00:12:09.761 "block_size": 512, 00:12:09.761 "num_blocks": 65536, 00:12:09.761 "uuid": "2808302b-25eb-4154-803b-9e4374d3c463", 00:12:09.761 "assigned_rate_limits": { 00:12:09.761 "rw_ios_per_sec": 0, 00:12:09.761 "rw_mbytes_per_sec": 0, 00:12:09.761 "r_mbytes_per_sec": 0, 00:12:09.761 "w_mbytes_per_sec": 0 00:12:09.761 }, 00:12:09.761 "claimed": true, 00:12:09.761 "claim_type": "exclusive_write", 00:12:09.761 "zoned": false, 00:12:09.761 "supported_io_types": { 00:12:09.761 "read": true, 00:12:09.761 "write": true, 00:12:09.761 "unmap": true, 00:12:09.761 "flush": true, 00:12:09.761 "reset": true, 00:12:09.761 "nvme_admin": false, 00:12:09.761 "nvme_io": false, 00:12:09.761 "nvme_io_md": false, 00:12:09.761 "write_zeroes": true, 00:12:09.761 "zcopy": true, 00:12:09.761 "get_zone_info": false, 00:12:09.761 "zone_management": false, 00:12:09.761 "zone_append": false, 00:12:09.761 "compare": false, 00:12:09.761 "compare_and_write": false, 00:12:09.761 "abort": true, 00:12:09.761 "seek_hole": false, 00:12:09.761 "seek_data": false, 00:12:09.761 "copy": true, 00:12:09.761 "nvme_iov_md": false 00:12:09.761 }, 00:12:09.761 "memory_domains": [ 00:12:09.761 { 00:12:09.761 "dma_device_id": "system", 00:12:09.761 "dma_device_type": 1 00:12:09.761 }, 00:12:09.761 { 00:12:09.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.761 "dma_device_type": 2 00:12:09.761 } 00:12:09.761 ], 00:12:09.761 "driver_specific": {} 
00:12:09.761 } 00:12:09.761 ] 00:12:09.761 21:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.761 21:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:09.761 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:09.761 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.761 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.761 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:09.761 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:09.761 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.761 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.762 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.762 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.762 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.762 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.762 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.762 21:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.762 21:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.762 21:39:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.762 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.762 "name": "Existed_Raid", 00:12:09.762 "uuid": "a7def94c-c09b-43d1-a181-ac23d0aa364f", 00:12:09.762 "strip_size_kb": 64, 00:12:09.762 "state": "configuring", 00:12:09.762 "raid_level": "raid0", 00:12:09.762 "superblock": true, 00:12:09.762 "num_base_bdevs": 4, 00:12:09.762 "num_base_bdevs_discovered": 1, 00:12:09.762 "num_base_bdevs_operational": 4, 00:12:09.762 "base_bdevs_list": [ 00:12:09.762 { 00:12:09.762 "name": "BaseBdev1", 00:12:09.762 "uuid": "2808302b-25eb-4154-803b-9e4374d3c463", 00:12:09.762 "is_configured": true, 00:12:09.762 "data_offset": 2048, 00:12:09.762 "data_size": 63488 00:12:09.762 }, 00:12:09.762 { 00:12:09.762 "name": "BaseBdev2", 00:12:09.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.762 "is_configured": false, 00:12:09.762 "data_offset": 0, 00:12:09.762 "data_size": 0 00:12:09.762 }, 00:12:09.762 { 00:12:09.762 "name": "BaseBdev3", 00:12:09.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.762 "is_configured": false, 00:12:09.762 "data_offset": 0, 00:12:09.762 "data_size": 0 00:12:09.762 }, 00:12:09.762 { 00:12:09.762 "name": "BaseBdev4", 00:12:09.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.762 "is_configured": false, 00:12:09.762 "data_offset": 0, 00:12:09.762 "data_size": 0 00:12:09.762 } 00:12:09.762 ] 00:12:09.762 }' 00:12:09.762 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.762 21:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.020 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:10.020 21:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.020 21:39:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:10.020 [2024-12-10 21:39:10.779463] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:10.020 [2024-12-10 21:39:10.779519] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:10.020 21:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.020 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:10.020 21:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.020 21:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.020 [2024-12-10 21:39:10.791503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:10.020 [2024-12-10 21:39:10.793551] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:10.020 [2024-12-10 21:39:10.793647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:10.020 [2024-12-10 21:39:10.793689] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:10.020 [2024-12-10 21:39:10.793729] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:10.020 [2024-12-10 21:39:10.793777] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:10.020 [2024-12-10 21:39:10.793809] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:10.020 21:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.020 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:10.020 21:39:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:10.020 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:10.020 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.020 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.020 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:10.020 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:10.020 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:10.020 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.020 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.020 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.020 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.280 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.280 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.280 21:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.280 21:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.280 21:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.280 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.280 "name": 
"Existed_Raid", 00:12:10.280 "uuid": "d936bd0f-1be2-42a0-9ca4-5df6dce934aa", 00:12:10.280 "strip_size_kb": 64, 00:12:10.280 "state": "configuring", 00:12:10.280 "raid_level": "raid0", 00:12:10.280 "superblock": true, 00:12:10.280 "num_base_bdevs": 4, 00:12:10.280 "num_base_bdevs_discovered": 1, 00:12:10.280 "num_base_bdevs_operational": 4, 00:12:10.280 "base_bdevs_list": [ 00:12:10.280 { 00:12:10.280 "name": "BaseBdev1", 00:12:10.280 "uuid": "2808302b-25eb-4154-803b-9e4374d3c463", 00:12:10.280 "is_configured": true, 00:12:10.280 "data_offset": 2048, 00:12:10.280 "data_size": 63488 00:12:10.280 }, 00:12:10.280 { 00:12:10.280 "name": "BaseBdev2", 00:12:10.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.280 "is_configured": false, 00:12:10.280 "data_offset": 0, 00:12:10.280 "data_size": 0 00:12:10.280 }, 00:12:10.280 { 00:12:10.280 "name": "BaseBdev3", 00:12:10.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.280 "is_configured": false, 00:12:10.280 "data_offset": 0, 00:12:10.280 "data_size": 0 00:12:10.280 }, 00:12:10.280 { 00:12:10.280 "name": "BaseBdev4", 00:12:10.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.280 "is_configured": false, 00:12:10.280 "data_offset": 0, 00:12:10.280 "data_size": 0 00:12:10.280 } 00:12:10.280 ] 00:12:10.280 }' 00:12:10.280 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.280 21:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.538 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:10.539 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.539 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.539 [2024-12-10 21:39:11.292092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:12:10.539 BaseBdev2 00:12:10.539 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.539 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:10.539 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:10.539 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:10.539 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:10.539 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:10.539 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:10.539 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:10.539 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.539 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.539 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.539 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:10.539 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.539 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.539 [ 00:12:10.539 { 00:12:10.539 "name": "BaseBdev2", 00:12:10.539 "aliases": [ 00:12:10.539 "86f4c144-7f44-40f6-8ddd-22b14f520ed3" 00:12:10.539 ], 00:12:10.539 "product_name": "Malloc disk", 00:12:10.539 "block_size": 512, 00:12:10.539 "num_blocks": 65536, 00:12:10.539 "uuid": "86f4c144-7f44-40f6-8ddd-22b14f520ed3", 00:12:10.539 
"assigned_rate_limits": { 00:12:10.539 "rw_ios_per_sec": 0, 00:12:10.798 "rw_mbytes_per_sec": 0, 00:12:10.798 "r_mbytes_per_sec": 0, 00:12:10.798 "w_mbytes_per_sec": 0 00:12:10.798 }, 00:12:10.798 "claimed": true, 00:12:10.798 "claim_type": "exclusive_write", 00:12:10.798 "zoned": false, 00:12:10.798 "supported_io_types": { 00:12:10.798 "read": true, 00:12:10.798 "write": true, 00:12:10.798 "unmap": true, 00:12:10.798 "flush": true, 00:12:10.798 "reset": true, 00:12:10.798 "nvme_admin": false, 00:12:10.798 "nvme_io": false, 00:12:10.798 "nvme_io_md": false, 00:12:10.798 "write_zeroes": true, 00:12:10.798 "zcopy": true, 00:12:10.798 "get_zone_info": false, 00:12:10.798 "zone_management": false, 00:12:10.798 "zone_append": false, 00:12:10.798 "compare": false, 00:12:10.798 "compare_and_write": false, 00:12:10.798 "abort": true, 00:12:10.798 "seek_hole": false, 00:12:10.798 "seek_data": false, 00:12:10.798 "copy": true, 00:12:10.798 "nvme_iov_md": false 00:12:10.798 }, 00:12:10.798 "memory_domains": [ 00:12:10.798 { 00:12:10.798 "dma_device_id": "system", 00:12:10.798 "dma_device_type": 1 00:12:10.798 }, 00:12:10.798 { 00:12:10.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.798 "dma_device_type": 2 00:12:10.798 } 00:12:10.798 ], 00:12:10.798 "driver_specific": {} 00:12:10.798 } 00:12:10.798 ] 00:12:10.798 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.798 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:10.798 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:10.798 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:10.798 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:10.798 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:12:10.798 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.798 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:10.798 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:10.798 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:10.798 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.798 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.798 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.798 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.799 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.799 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.799 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.799 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.799 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.799 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.799 "name": "Existed_Raid", 00:12:10.799 "uuid": "d936bd0f-1be2-42a0-9ca4-5df6dce934aa", 00:12:10.799 "strip_size_kb": 64, 00:12:10.799 "state": "configuring", 00:12:10.799 "raid_level": "raid0", 00:12:10.799 "superblock": true, 00:12:10.799 "num_base_bdevs": 4, 00:12:10.799 "num_base_bdevs_discovered": 2, 00:12:10.799 "num_base_bdevs_operational": 4, 
00:12:10.799 "base_bdevs_list": [ 00:12:10.799 { 00:12:10.799 "name": "BaseBdev1", 00:12:10.799 "uuid": "2808302b-25eb-4154-803b-9e4374d3c463", 00:12:10.799 "is_configured": true, 00:12:10.799 "data_offset": 2048, 00:12:10.799 "data_size": 63488 00:12:10.799 }, 00:12:10.799 { 00:12:10.799 "name": "BaseBdev2", 00:12:10.799 "uuid": "86f4c144-7f44-40f6-8ddd-22b14f520ed3", 00:12:10.799 "is_configured": true, 00:12:10.799 "data_offset": 2048, 00:12:10.799 "data_size": 63488 00:12:10.799 }, 00:12:10.799 { 00:12:10.799 "name": "BaseBdev3", 00:12:10.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.799 "is_configured": false, 00:12:10.799 "data_offset": 0, 00:12:10.799 "data_size": 0 00:12:10.799 }, 00:12:10.799 { 00:12:10.799 "name": "BaseBdev4", 00:12:10.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.799 "is_configured": false, 00:12:10.799 "data_offset": 0, 00:12:10.799 "data_size": 0 00:12:10.799 } 00:12:10.799 ] 00:12:10.799 }' 00:12:10.799 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.799 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.057 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:11.057 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.057 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.316 [2024-12-10 21:39:11.869531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:11.316 BaseBdev3 00:12:11.316 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.316 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:11.316 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:12:11.316 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:11.316 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:11.316 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:11.316 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:11.316 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:11.316 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.316 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.316 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.316 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:11.316 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.316 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.316 [ 00:12:11.316 { 00:12:11.316 "name": "BaseBdev3", 00:12:11.316 "aliases": [ 00:12:11.316 "a2a00960-d558-4d25-b507-ca8269c226e3" 00:12:11.316 ], 00:12:11.316 "product_name": "Malloc disk", 00:12:11.316 "block_size": 512, 00:12:11.316 "num_blocks": 65536, 00:12:11.316 "uuid": "a2a00960-d558-4d25-b507-ca8269c226e3", 00:12:11.316 "assigned_rate_limits": { 00:12:11.316 "rw_ios_per_sec": 0, 00:12:11.316 "rw_mbytes_per_sec": 0, 00:12:11.316 "r_mbytes_per_sec": 0, 00:12:11.316 "w_mbytes_per_sec": 0 00:12:11.316 }, 00:12:11.316 "claimed": true, 00:12:11.316 "claim_type": "exclusive_write", 00:12:11.316 "zoned": false, 00:12:11.316 "supported_io_types": { 00:12:11.316 "read": true, 00:12:11.316 
"write": true, 00:12:11.316 "unmap": true, 00:12:11.316 "flush": true, 00:12:11.316 "reset": true, 00:12:11.316 "nvme_admin": false, 00:12:11.316 "nvme_io": false, 00:12:11.316 "nvme_io_md": false, 00:12:11.316 "write_zeroes": true, 00:12:11.316 "zcopy": true, 00:12:11.316 "get_zone_info": false, 00:12:11.316 "zone_management": false, 00:12:11.316 "zone_append": false, 00:12:11.316 "compare": false, 00:12:11.316 "compare_and_write": false, 00:12:11.316 "abort": true, 00:12:11.316 "seek_hole": false, 00:12:11.316 "seek_data": false, 00:12:11.316 "copy": true, 00:12:11.316 "nvme_iov_md": false 00:12:11.316 }, 00:12:11.316 "memory_domains": [ 00:12:11.316 { 00:12:11.316 "dma_device_id": "system", 00:12:11.316 "dma_device_type": 1 00:12:11.316 }, 00:12:11.316 { 00:12:11.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.316 "dma_device_type": 2 00:12:11.316 } 00:12:11.316 ], 00:12:11.316 "driver_specific": {} 00:12:11.316 } 00:12:11.316 ] 00:12:11.316 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.316 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:11.316 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:11.316 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:11.316 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:11.316 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.316 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:11.316 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:11.316 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:12:11.316 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.316 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.316 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.316 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.316 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.316 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.316 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.316 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.316 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.317 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.317 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.317 "name": "Existed_Raid", 00:12:11.317 "uuid": "d936bd0f-1be2-42a0-9ca4-5df6dce934aa", 00:12:11.317 "strip_size_kb": 64, 00:12:11.317 "state": "configuring", 00:12:11.317 "raid_level": "raid0", 00:12:11.317 "superblock": true, 00:12:11.317 "num_base_bdevs": 4, 00:12:11.317 "num_base_bdevs_discovered": 3, 00:12:11.317 "num_base_bdevs_operational": 4, 00:12:11.317 "base_bdevs_list": [ 00:12:11.317 { 00:12:11.317 "name": "BaseBdev1", 00:12:11.317 "uuid": "2808302b-25eb-4154-803b-9e4374d3c463", 00:12:11.317 "is_configured": true, 00:12:11.317 "data_offset": 2048, 00:12:11.317 "data_size": 63488 00:12:11.317 }, 00:12:11.317 { 00:12:11.317 "name": "BaseBdev2", 00:12:11.317 "uuid": 
"86f4c144-7f44-40f6-8ddd-22b14f520ed3", 00:12:11.317 "is_configured": true, 00:12:11.317 "data_offset": 2048, 00:12:11.317 "data_size": 63488 00:12:11.317 }, 00:12:11.317 { 00:12:11.317 "name": "BaseBdev3", 00:12:11.317 "uuid": "a2a00960-d558-4d25-b507-ca8269c226e3", 00:12:11.317 "is_configured": true, 00:12:11.317 "data_offset": 2048, 00:12:11.317 "data_size": 63488 00:12:11.317 }, 00:12:11.317 { 00:12:11.317 "name": "BaseBdev4", 00:12:11.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.317 "is_configured": false, 00:12:11.317 "data_offset": 0, 00:12:11.317 "data_size": 0 00:12:11.317 } 00:12:11.317 ] 00:12:11.317 }' 00:12:11.317 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.317 21:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.885 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:11.885 21:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.885 21:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.885 [2024-12-10 21:39:12.436939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:11.885 [2024-12-10 21:39:12.437370] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:11.885 [2024-12-10 21:39:12.437394] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:11.885 [2024-12-10 21:39:12.437748] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:11.885 [2024-12-10 21:39:12.437930] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:11.885 [2024-12-10 21:39:12.437944] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:12:11.885 BaseBdev4 00:12:11.886 [2024-12-10 21:39:12.438124] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:11.886 21:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.886 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:11.886 21:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:11.886 21:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:11.886 21:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:11.886 21:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:11.886 21:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:11.886 21:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:11.886 21:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.886 21:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.886 21:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.886 21:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:11.886 21:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.886 21:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.886 [ 00:12:11.886 { 00:12:11.886 "name": "BaseBdev4", 00:12:11.886 "aliases": [ 00:12:11.886 "559af0ea-1d44-4c25-8f91-49cbaf624294" 00:12:11.886 ], 00:12:11.886 "product_name": "Malloc disk", 00:12:11.886 "block_size": 512, 00:12:11.886 
"num_blocks": 65536, 00:12:11.886 "uuid": "559af0ea-1d44-4c25-8f91-49cbaf624294", 00:12:11.886 "assigned_rate_limits": { 00:12:11.886 "rw_ios_per_sec": 0, 00:12:11.886 "rw_mbytes_per_sec": 0, 00:12:11.886 "r_mbytes_per_sec": 0, 00:12:11.886 "w_mbytes_per_sec": 0 00:12:11.886 }, 00:12:11.886 "claimed": true, 00:12:11.886 "claim_type": "exclusive_write", 00:12:11.886 "zoned": false, 00:12:11.886 "supported_io_types": { 00:12:11.886 "read": true, 00:12:11.886 "write": true, 00:12:11.886 "unmap": true, 00:12:11.886 "flush": true, 00:12:11.886 "reset": true, 00:12:11.886 "nvme_admin": false, 00:12:11.886 "nvme_io": false, 00:12:11.886 "nvme_io_md": false, 00:12:11.886 "write_zeroes": true, 00:12:11.886 "zcopy": true, 00:12:11.886 "get_zone_info": false, 00:12:11.886 "zone_management": false, 00:12:11.886 "zone_append": false, 00:12:11.886 "compare": false, 00:12:11.886 "compare_and_write": false, 00:12:11.886 "abort": true, 00:12:11.886 "seek_hole": false, 00:12:11.886 "seek_data": false, 00:12:11.886 "copy": true, 00:12:11.886 "nvme_iov_md": false 00:12:11.886 }, 00:12:11.886 "memory_domains": [ 00:12:11.886 { 00:12:11.886 "dma_device_id": "system", 00:12:11.886 "dma_device_type": 1 00:12:11.886 }, 00:12:11.886 { 00:12:11.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.886 "dma_device_type": 2 00:12:11.886 } 00:12:11.886 ], 00:12:11.886 "driver_specific": {} 00:12:11.886 } 00:12:11.886 ] 00:12:11.886 21:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.886 21:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:11.886 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:11.886 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:11.886 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:12:11.886 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.886 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.886 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:11.886 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:11.886 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.886 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.886 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.886 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.886 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.886 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.886 21:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.886 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.886 21:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.886 21:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.886 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.886 "name": "Existed_Raid", 00:12:11.886 "uuid": "d936bd0f-1be2-42a0-9ca4-5df6dce934aa", 00:12:11.886 "strip_size_kb": 64, 00:12:11.886 "state": "online", 00:12:11.886 "raid_level": "raid0", 00:12:11.886 "superblock": true, 00:12:11.886 "num_base_bdevs": 4, 
00:12:11.886 "num_base_bdevs_discovered": 4, 00:12:11.886 "num_base_bdevs_operational": 4, 00:12:11.886 "base_bdevs_list": [ 00:12:11.886 { 00:12:11.886 "name": "BaseBdev1", 00:12:11.886 "uuid": "2808302b-25eb-4154-803b-9e4374d3c463", 00:12:11.886 "is_configured": true, 00:12:11.886 "data_offset": 2048, 00:12:11.886 "data_size": 63488 00:12:11.886 }, 00:12:11.886 { 00:12:11.886 "name": "BaseBdev2", 00:12:11.886 "uuid": "86f4c144-7f44-40f6-8ddd-22b14f520ed3", 00:12:11.886 "is_configured": true, 00:12:11.886 "data_offset": 2048, 00:12:11.886 "data_size": 63488 00:12:11.886 }, 00:12:11.886 { 00:12:11.886 "name": "BaseBdev3", 00:12:11.886 "uuid": "a2a00960-d558-4d25-b507-ca8269c226e3", 00:12:11.886 "is_configured": true, 00:12:11.886 "data_offset": 2048, 00:12:11.886 "data_size": 63488 00:12:11.886 }, 00:12:11.886 { 00:12:11.886 "name": "BaseBdev4", 00:12:11.886 "uuid": "559af0ea-1d44-4c25-8f91-49cbaf624294", 00:12:11.886 "is_configured": true, 00:12:11.886 "data_offset": 2048, 00:12:11.886 "data_size": 63488 00:12:11.886 } 00:12:11.886 ] 00:12:11.886 }' 00:12:11.886 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.886 21:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.451 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:12.451 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:12.451 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:12.451 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:12.451 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:12.451 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:12.451 
21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:12.451 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:12.451 21:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.451 21:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.451 [2024-12-10 21:39:12.980518] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:12.451 21:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.451 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:12.451 "name": "Existed_Raid", 00:12:12.451 "aliases": [ 00:12:12.451 "d936bd0f-1be2-42a0-9ca4-5df6dce934aa" 00:12:12.451 ], 00:12:12.451 "product_name": "Raid Volume", 00:12:12.451 "block_size": 512, 00:12:12.451 "num_blocks": 253952, 00:12:12.451 "uuid": "d936bd0f-1be2-42a0-9ca4-5df6dce934aa", 00:12:12.451 "assigned_rate_limits": { 00:12:12.451 "rw_ios_per_sec": 0, 00:12:12.451 "rw_mbytes_per_sec": 0, 00:12:12.451 "r_mbytes_per_sec": 0, 00:12:12.451 "w_mbytes_per_sec": 0 00:12:12.451 }, 00:12:12.451 "claimed": false, 00:12:12.451 "zoned": false, 00:12:12.451 "supported_io_types": { 00:12:12.451 "read": true, 00:12:12.451 "write": true, 00:12:12.451 "unmap": true, 00:12:12.452 "flush": true, 00:12:12.452 "reset": true, 00:12:12.452 "nvme_admin": false, 00:12:12.452 "nvme_io": false, 00:12:12.452 "nvme_io_md": false, 00:12:12.452 "write_zeroes": true, 00:12:12.452 "zcopy": false, 00:12:12.452 "get_zone_info": false, 00:12:12.452 "zone_management": false, 00:12:12.452 "zone_append": false, 00:12:12.452 "compare": false, 00:12:12.452 "compare_and_write": false, 00:12:12.452 "abort": false, 00:12:12.452 "seek_hole": false, 00:12:12.452 "seek_data": false, 00:12:12.452 "copy": false, 00:12:12.452 
"nvme_iov_md": false 00:12:12.452 }, 00:12:12.452 "memory_domains": [ 00:12:12.452 { 00:12:12.452 "dma_device_id": "system", 00:12:12.452 "dma_device_type": 1 00:12:12.452 }, 00:12:12.452 { 00:12:12.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.452 "dma_device_type": 2 00:12:12.452 }, 00:12:12.452 { 00:12:12.452 "dma_device_id": "system", 00:12:12.452 "dma_device_type": 1 00:12:12.452 }, 00:12:12.452 { 00:12:12.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.452 "dma_device_type": 2 00:12:12.452 }, 00:12:12.452 { 00:12:12.452 "dma_device_id": "system", 00:12:12.452 "dma_device_type": 1 00:12:12.452 }, 00:12:12.452 { 00:12:12.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.452 "dma_device_type": 2 00:12:12.452 }, 00:12:12.452 { 00:12:12.452 "dma_device_id": "system", 00:12:12.452 "dma_device_type": 1 00:12:12.452 }, 00:12:12.452 { 00:12:12.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.452 "dma_device_type": 2 00:12:12.452 } 00:12:12.452 ], 00:12:12.452 "driver_specific": { 00:12:12.452 "raid": { 00:12:12.452 "uuid": "d936bd0f-1be2-42a0-9ca4-5df6dce934aa", 00:12:12.452 "strip_size_kb": 64, 00:12:12.452 "state": "online", 00:12:12.452 "raid_level": "raid0", 00:12:12.452 "superblock": true, 00:12:12.452 "num_base_bdevs": 4, 00:12:12.452 "num_base_bdevs_discovered": 4, 00:12:12.452 "num_base_bdevs_operational": 4, 00:12:12.452 "base_bdevs_list": [ 00:12:12.452 { 00:12:12.452 "name": "BaseBdev1", 00:12:12.452 "uuid": "2808302b-25eb-4154-803b-9e4374d3c463", 00:12:12.452 "is_configured": true, 00:12:12.452 "data_offset": 2048, 00:12:12.452 "data_size": 63488 00:12:12.452 }, 00:12:12.452 { 00:12:12.452 "name": "BaseBdev2", 00:12:12.452 "uuid": "86f4c144-7f44-40f6-8ddd-22b14f520ed3", 00:12:12.452 "is_configured": true, 00:12:12.452 "data_offset": 2048, 00:12:12.452 "data_size": 63488 00:12:12.452 }, 00:12:12.452 { 00:12:12.452 "name": "BaseBdev3", 00:12:12.452 "uuid": "a2a00960-d558-4d25-b507-ca8269c226e3", 00:12:12.452 "is_configured": true, 
00:12:12.452 "data_offset": 2048, 00:12:12.452 "data_size": 63488 00:12:12.452 }, 00:12:12.452 { 00:12:12.452 "name": "BaseBdev4", 00:12:12.452 "uuid": "559af0ea-1d44-4c25-8f91-49cbaf624294", 00:12:12.452 "is_configured": true, 00:12:12.452 "data_offset": 2048, 00:12:12.452 "data_size": 63488 00:12:12.452 } 00:12:12.452 ] 00:12:12.452 } 00:12:12.452 } 00:12:12.452 }' 00:12:12.452 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:12.452 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:12.452 BaseBdev2 00:12:12.452 BaseBdev3 00:12:12.452 BaseBdev4' 00:12:12.452 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.452 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:12.452 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.452 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.452 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:12.452 21:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.452 21:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.452 21:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.452 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.452 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.452 21:39:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.452 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:12.452 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.452 21:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.452 21:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.452 21:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.452 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.452 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.452 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.452 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:12.452 21:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.452 21:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.452 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.452 21:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.710 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.710 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.710 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:12.710 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:12.710 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.710 21:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.710 21:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.710 21:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.710 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.710 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.710 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:12.710 21:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.710 21:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.710 [2024-12-10 21:39:13.323708] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:12.710 [2024-12-10 21:39:13.323824] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:12.710 [2024-12-10 21:39:13.323913] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:12.710 21:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.710 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:12.710 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:12.710 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:12:12.710 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:12.710 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:12.710 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:12:12.710 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.710 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:12.710 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:12.710 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:12.710 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:12.710 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.710 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.710 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.710 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.710 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.710 21:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.710 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.710 21:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.710 21:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:12.968 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.968 "name": "Existed_Raid", 00:12:12.968 "uuid": "d936bd0f-1be2-42a0-9ca4-5df6dce934aa", 00:12:12.968 "strip_size_kb": 64, 00:12:12.968 "state": "offline", 00:12:12.968 "raid_level": "raid0", 00:12:12.968 "superblock": true, 00:12:12.968 "num_base_bdevs": 4, 00:12:12.968 "num_base_bdevs_discovered": 3, 00:12:12.968 "num_base_bdevs_operational": 3, 00:12:12.968 "base_bdevs_list": [ 00:12:12.968 { 00:12:12.968 "name": null, 00:12:12.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.968 "is_configured": false, 00:12:12.968 "data_offset": 0, 00:12:12.968 "data_size": 63488 00:12:12.968 }, 00:12:12.968 { 00:12:12.968 "name": "BaseBdev2", 00:12:12.968 "uuid": "86f4c144-7f44-40f6-8ddd-22b14f520ed3", 00:12:12.968 "is_configured": true, 00:12:12.968 "data_offset": 2048, 00:12:12.968 "data_size": 63488 00:12:12.968 }, 00:12:12.968 { 00:12:12.968 "name": "BaseBdev3", 00:12:12.968 "uuid": "a2a00960-d558-4d25-b507-ca8269c226e3", 00:12:12.968 "is_configured": true, 00:12:12.968 "data_offset": 2048, 00:12:12.968 "data_size": 63488 00:12:12.968 }, 00:12:12.968 { 00:12:12.968 "name": "BaseBdev4", 00:12:12.968 "uuid": "559af0ea-1d44-4c25-8f91-49cbaf624294", 00:12:12.968 "is_configured": true, 00:12:12.968 "data_offset": 2048, 00:12:12.968 "data_size": 63488 00:12:12.968 } 00:12:12.968 ] 00:12:12.968 }' 00:12:12.968 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.968 21:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.226 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:13.226 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:13.226 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.226 
21:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.226 21:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.226 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:13.226 21:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.226 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:13.226 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:13.226 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:13.226 21:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.226 21:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.226 [2024-12-10 21:39:13.998244] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:13.484 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.484 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:13.484 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:13.484 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.484 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:13.484 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.484 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.484 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:13.484 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:13.484 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:13.484 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:13.484 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.484 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.484 [2024-12-10 21:39:14.168644] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:13.741 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.741 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:13.741 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:13.741 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.741 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:13.741 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.741 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.741 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.741 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:13.741 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:13.741 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:13.741 21:39:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.741 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.741 [2024-12-10 21:39:14.321038] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:13.741 [2024-12-10 21:39:14.321154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:13.741 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.741 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:13.741 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:13.741 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.741 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:13.741 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.741 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.741 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.741 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:13.741 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:13.741 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:13.741 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:13.741 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:13.741 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:12:13.741 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.741 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.999 BaseBdev2 00:12:13.999 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.999 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:13.999 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:13.999 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:13.999 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:13.999 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:13.999 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:13.999 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:13.999 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.999 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.999 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.999 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:13.999 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.999 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.999 [ 00:12:13.999 { 00:12:13.999 "name": "BaseBdev2", 00:12:13.999 "aliases": [ 00:12:13.999 
"b60bf325-e2d2-47ef-8dc0-64bae0331fea" 00:12:13.999 ], 00:12:13.999 "product_name": "Malloc disk", 00:12:13.999 "block_size": 512, 00:12:13.999 "num_blocks": 65536, 00:12:13.999 "uuid": "b60bf325-e2d2-47ef-8dc0-64bae0331fea", 00:12:13.999 "assigned_rate_limits": { 00:12:13.999 "rw_ios_per_sec": 0, 00:12:13.999 "rw_mbytes_per_sec": 0, 00:12:13.999 "r_mbytes_per_sec": 0, 00:12:13.999 "w_mbytes_per_sec": 0 00:12:13.999 }, 00:12:14.000 "claimed": false, 00:12:14.000 "zoned": false, 00:12:14.000 "supported_io_types": { 00:12:14.000 "read": true, 00:12:14.000 "write": true, 00:12:14.000 "unmap": true, 00:12:14.000 "flush": true, 00:12:14.000 "reset": true, 00:12:14.000 "nvme_admin": false, 00:12:14.000 "nvme_io": false, 00:12:14.000 "nvme_io_md": false, 00:12:14.000 "write_zeroes": true, 00:12:14.000 "zcopy": true, 00:12:14.000 "get_zone_info": false, 00:12:14.000 "zone_management": false, 00:12:14.000 "zone_append": false, 00:12:14.000 "compare": false, 00:12:14.000 "compare_and_write": false, 00:12:14.000 "abort": true, 00:12:14.000 "seek_hole": false, 00:12:14.000 "seek_data": false, 00:12:14.000 "copy": true, 00:12:14.000 "nvme_iov_md": false 00:12:14.000 }, 00:12:14.000 "memory_domains": [ 00:12:14.000 { 00:12:14.000 "dma_device_id": "system", 00:12:14.000 "dma_device_type": 1 00:12:14.000 }, 00:12:14.000 { 00:12:14.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.000 "dma_device_type": 2 00:12:14.000 } 00:12:14.000 ], 00:12:14.000 "driver_specific": {} 00:12:14.000 } 00:12:14.000 ] 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:14.000 21:39:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.000 BaseBdev3 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.000 [ 00:12:14.000 { 
00:12:14.000 "name": "BaseBdev3", 00:12:14.000 "aliases": [ 00:12:14.000 "775bdf5c-f114-4273-8354-eb692e96cbb5" 00:12:14.000 ], 00:12:14.000 "product_name": "Malloc disk", 00:12:14.000 "block_size": 512, 00:12:14.000 "num_blocks": 65536, 00:12:14.000 "uuid": "775bdf5c-f114-4273-8354-eb692e96cbb5", 00:12:14.000 "assigned_rate_limits": { 00:12:14.000 "rw_ios_per_sec": 0, 00:12:14.000 "rw_mbytes_per_sec": 0, 00:12:14.000 "r_mbytes_per_sec": 0, 00:12:14.000 "w_mbytes_per_sec": 0 00:12:14.000 }, 00:12:14.000 "claimed": false, 00:12:14.000 "zoned": false, 00:12:14.000 "supported_io_types": { 00:12:14.000 "read": true, 00:12:14.000 "write": true, 00:12:14.000 "unmap": true, 00:12:14.000 "flush": true, 00:12:14.000 "reset": true, 00:12:14.000 "nvme_admin": false, 00:12:14.000 "nvme_io": false, 00:12:14.000 "nvme_io_md": false, 00:12:14.000 "write_zeroes": true, 00:12:14.000 "zcopy": true, 00:12:14.000 "get_zone_info": false, 00:12:14.000 "zone_management": false, 00:12:14.000 "zone_append": false, 00:12:14.000 "compare": false, 00:12:14.000 "compare_and_write": false, 00:12:14.000 "abort": true, 00:12:14.000 "seek_hole": false, 00:12:14.000 "seek_data": false, 00:12:14.000 "copy": true, 00:12:14.000 "nvme_iov_md": false 00:12:14.000 }, 00:12:14.000 "memory_domains": [ 00:12:14.000 { 00:12:14.000 "dma_device_id": "system", 00:12:14.000 "dma_device_type": 1 00:12:14.000 }, 00:12:14.000 { 00:12:14.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.000 "dma_device_type": 2 00:12:14.000 } 00:12:14.000 ], 00:12:14.000 "driver_specific": {} 00:12:14.000 } 00:12:14.000 ] 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.000 BaseBdev4 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:12:14.000 [ 00:12:14.000 { 00:12:14.000 "name": "BaseBdev4", 00:12:14.000 "aliases": [ 00:12:14.000 "e601eb43-1f80-46a3-86e8-fc22518d29bd" 00:12:14.000 ], 00:12:14.000 "product_name": "Malloc disk", 00:12:14.000 "block_size": 512, 00:12:14.000 "num_blocks": 65536, 00:12:14.000 "uuid": "e601eb43-1f80-46a3-86e8-fc22518d29bd", 00:12:14.000 "assigned_rate_limits": { 00:12:14.000 "rw_ios_per_sec": 0, 00:12:14.000 "rw_mbytes_per_sec": 0, 00:12:14.000 "r_mbytes_per_sec": 0, 00:12:14.000 "w_mbytes_per_sec": 0 00:12:14.000 }, 00:12:14.000 "claimed": false, 00:12:14.000 "zoned": false, 00:12:14.000 "supported_io_types": { 00:12:14.000 "read": true, 00:12:14.000 "write": true, 00:12:14.000 "unmap": true, 00:12:14.000 "flush": true, 00:12:14.000 "reset": true, 00:12:14.000 "nvme_admin": false, 00:12:14.000 "nvme_io": false, 00:12:14.000 "nvme_io_md": false, 00:12:14.000 "write_zeroes": true, 00:12:14.000 "zcopy": true, 00:12:14.000 "get_zone_info": false, 00:12:14.000 "zone_management": false, 00:12:14.000 "zone_append": false, 00:12:14.000 "compare": false, 00:12:14.000 "compare_and_write": false, 00:12:14.000 "abort": true, 00:12:14.000 "seek_hole": false, 00:12:14.000 "seek_data": false, 00:12:14.000 "copy": true, 00:12:14.000 "nvme_iov_md": false 00:12:14.000 }, 00:12:14.000 "memory_domains": [ 00:12:14.000 { 00:12:14.000 "dma_device_id": "system", 00:12:14.000 "dma_device_type": 1 00:12:14.000 }, 00:12:14.000 { 00:12:14.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.000 "dma_device_type": 2 00:12:14.000 } 00:12:14.000 ], 00:12:14.000 "driver_specific": {} 00:12:14.000 } 00:12:14.000 ] 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:14.000 21:39:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.000 [2024-12-10 21:39:14.735383] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:14.000 [2024-12-10 21:39:14.735501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:14.000 [2024-12-10 21:39:14.735568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:14.000 [2024-12-10 21:39:14.737676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:14.000 [2024-12-10 21:39:14.737778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.000 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:14.001 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:14.001 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:14.001 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.001 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.001 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.001 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.001 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.001 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.001 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.001 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.001 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.259 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.259 "name": "Existed_Raid", 00:12:14.259 "uuid": "bee82d8e-fc9d-4acd-b8ef-a4de26d64d44", 00:12:14.259 "strip_size_kb": 64, 00:12:14.259 "state": "configuring", 00:12:14.259 "raid_level": "raid0", 00:12:14.259 "superblock": true, 00:12:14.259 "num_base_bdevs": 4, 00:12:14.259 "num_base_bdevs_discovered": 3, 00:12:14.259 "num_base_bdevs_operational": 4, 00:12:14.259 "base_bdevs_list": [ 00:12:14.259 { 00:12:14.259 "name": "BaseBdev1", 00:12:14.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.259 "is_configured": false, 00:12:14.259 "data_offset": 0, 00:12:14.259 "data_size": 0 00:12:14.259 }, 00:12:14.259 { 00:12:14.259 "name": "BaseBdev2", 00:12:14.259 "uuid": "b60bf325-e2d2-47ef-8dc0-64bae0331fea", 00:12:14.259 "is_configured": true, 00:12:14.259 "data_offset": 2048, 00:12:14.259 "data_size": 63488 
00:12:14.259 }, 00:12:14.259 { 00:12:14.259 "name": "BaseBdev3", 00:12:14.259 "uuid": "775bdf5c-f114-4273-8354-eb692e96cbb5", 00:12:14.259 "is_configured": true, 00:12:14.259 "data_offset": 2048, 00:12:14.259 "data_size": 63488 00:12:14.259 }, 00:12:14.259 { 00:12:14.259 "name": "BaseBdev4", 00:12:14.259 "uuid": "e601eb43-1f80-46a3-86e8-fc22518d29bd", 00:12:14.259 "is_configured": true, 00:12:14.259 "data_offset": 2048, 00:12:14.259 "data_size": 63488 00:12:14.259 } 00:12:14.259 ] 00:12:14.259 }' 00:12:14.259 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.259 21:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.517 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:14.517 21:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.517 21:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.517 [2024-12-10 21:39:15.218609] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:14.517 21:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.517 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:14.517 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.517 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.517 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:14.517 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:14.517 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:14.517 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.517 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.517 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.517 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.517 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.517 21:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.517 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.517 21:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.517 21:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.517 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.517 "name": "Existed_Raid", 00:12:14.517 "uuid": "bee82d8e-fc9d-4acd-b8ef-a4de26d64d44", 00:12:14.517 "strip_size_kb": 64, 00:12:14.517 "state": "configuring", 00:12:14.517 "raid_level": "raid0", 00:12:14.517 "superblock": true, 00:12:14.517 "num_base_bdevs": 4, 00:12:14.517 "num_base_bdevs_discovered": 2, 00:12:14.517 "num_base_bdevs_operational": 4, 00:12:14.517 "base_bdevs_list": [ 00:12:14.517 { 00:12:14.517 "name": "BaseBdev1", 00:12:14.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.517 "is_configured": false, 00:12:14.517 "data_offset": 0, 00:12:14.517 "data_size": 0 00:12:14.517 }, 00:12:14.517 { 00:12:14.517 "name": null, 00:12:14.517 "uuid": "b60bf325-e2d2-47ef-8dc0-64bae0331fea", 00:12:14.517 "is_configured": false, 00:12:14.517 "data_offset": 0, 00:12:14.517 "data_size": 63488 
00:12:14.518 }, 00:12:14.518 { 00:12:14.518 "name": "BaseBdev3", 00:12:14.518 "uuid": "775bdf5c-f114-4273-8354-eb692e96cbb5", 00:12:14.518 "is_configured": true, 00:12:14.518 "data_offset": 2048, 00:12:14.518 "data_size": 63488 00:12:14.518 }, 00:12:14.518 { 00:12:14.518 "name": "BaseBdev4", 00:12:14.518 "uuid": "e601eb43-1f80-46a3-86e8-fc22518d29bd", 00:12:14.518 "is_configured": true, 00:12:14.518 "data_offset": 2048, 00:12:14.518 "data_size": 63488 00:12:14.518 } 00:12:14.518 ] 00:12:14.518 }' 00:12:14.518 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.518 21:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.083 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.083 21:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.083 21:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.083 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:15.083 21:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.083 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:15.083 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:15.083 21:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.083 21:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.083 BaseBdev1 00:12:15.084 [2024-12-10 21:39:15.744800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:15.084 21:39:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.084 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:15.084 21:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:15.084 21:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:15.084 21:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:15.084 21:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:15.084 21:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:15.084 21:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:15.084 21:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.084 21:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.084 21:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.084 21:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:15.084 21:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.084 21:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.084 [ 00:12:15.084 { 00:12:15.084 "name": "BaseBdev1", 00:12:15.084 "aliases": [ 00:12:15.084 "64963894-7b67-4993-8112-74f03d015d57" 00:12:15.084 ], 00:12:15.084 "product_name": "Malloc disk", 00:12:15.084 "block_size": 512, 00:12:15.084 "num_blocks": 65536, 00:12:15.084 "uuid": "64963894-7b67-4993-8112-74f03d015d57", 00:12:15.084 "assigned_rate_limits": { 00:12:15.084 "rw_ios_per_sec": 0, 00:12:15.084 "rw_mbytes_per_sec": 0, 
00:12:15.084 "r_mbytes_per_sec": 0, 00:12:15.084 "w_mbytes_per_sec": 0 00:12:15.084 }, 00:12:15.084 "claimed": true, 00:12:15.084 "claim_type": "exclusive_write", 00:12:15.084 "zoned": false, 00:12:15.084 "supported_io_types": { 00:12:15.084 "read": true, 00:12:15.084 "write": true, 00:12:15.084 "unmap": true, 00:12:15.084 "flush": true, 00:12:15.084 "reset": true, 00:12:15.084 "nvme_admin": false, 00:12:15.084 "nvme_io": false, 00:12:15.084 "nvme_io_md": false, 00:12:15.084 "write_zeroes": true, 00:12:15.084 "zcopy": true, 00:12:15.084 "get_zone_info": false, 00:12:15.084 "zone_management": false, 00:12:15.084 "zone_append": false, 00:12:15.084 "compare": false, 00:12:15.084 "compare_and_write": false, 00:12:15.084 "abort": true, 00:12:15.084 "seek_hole": false, 00:12:15.084 "seek_data": false, 00:12:15.084 "copy": true, 00:12:15.084 "nvme_iov_md": false 00:12:15.084 }, 00:12:15.084 "memory_domains": [ 00:12:15.084 { 00:12:15.084 "dma_device_id": "system", 00:12:15.084 "dma_device_type": 1 00:12:15.084 }, 00:12:15.084 { 00:12:15.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.084 "dma_device_type": 2 00:12:15.084 } 00:12:15.084 ], 00:12:15.084 "driver_specific": {} 00:12:15.084 } 00:12:15.084 ] 00:12:15.084 21:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.084 21:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:15.084 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:15.084 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.084 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.084 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:15.084 21:39:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:15.084 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:15.084 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.084 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.084 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.084 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.084 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.084 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.084 21:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.084 21:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.084 21:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.084 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.084 "name": "Existed_Raid", 00:12:15.084 "uuid": "bee82d8e-fc9d-4acd-b8ef-a4de26d64d44", 00:12:15.084 "strip_size_kb": 64, 00:12:15.084 "state": "configuring", 00:12:15.084 "raid_level": "raid0", 00:12:15.084 "superblock": true, 00:12:15.084 "num_base_bdevs": 4, 00:12:15.084 "num_base_bdevs_discovered": 3, 00:12:15.084 "num_base_bdevs_operational": 4, 00:12:15.084 "base_bdevs_list": [ 00:12:15.084 { 00:12:15.084 "name": "BaseBdev1", 00:12:15.084 "uuid": "64963894-7b67-4993-8112-74f03d015d57", 00:12:15.084 "is_configured": true, 00:12:15.084 "data_offset": 2048, 00:12:15.084 "data_size": 63488 00:12:15.084 }, 00:12:15.084 { 
00:12:15.084 "name": null, 00:12:15.084 "uuid": "b60bf325-e2d2-47ef-8dc0-64bae0331fea", 00:12:15.084 "is_configured": false, 00:12:15.084 "data_offset": 0, 00:12:15.084 "data_size": 63488 00:12:15.084 }, 00:12:15.084 { 00:12:15.084 "name": "BaseBdev3", 00:12:15.084 "uuid": "775bdf5c-f114-4273-8354-eb692e96cbb5", 00:12:15.084 "is_configured": true, 00:12:15.084 "data_offset": 2048, 00:12:15.084 "data_size": 63488 00:12:15.084 }, 00:12:15.084 { 00:12:15.084 "name": "BaseBdev4", 00:12:15.084 "uuid": "e601eb43-1f80-46a3-86e8-fc22518d29bd", 00:12:15.084 "is_configured": true, 00:12:15.084 "data_offset": 2048, 00:12:15.084 "data_size": 63488 00:12:15.084 } 00:12:15.084 ] 00:12:15.084 }' 00:12:15.084 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.084 21:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.652 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.652 21:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.652 21:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.652 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:15.653 21:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.653 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:15.653 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:15.653 21:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.653 21:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.653 [2024-12-10 21:39:16.327966] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:15.653 21:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.653 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:15.653 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.653 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.653 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:15.653 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:15.653 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:15.653 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.653 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.653 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.653 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.653 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.653 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.653 21:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.653 21:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.653 21:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.653 21:39:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.653 "name": "Existed_Raid", 00:12:15.653 "uuid": "bee82d8e-fc9d-4acd-b8ef-a4de26d64d44", 00:12:15.653 "strip_size_kb": 64, 00:12:15.653 "state": "configuring", 00:12:15.653 "raid_level": "raid0", 00:12:15.653 "superblock": true, 00:12:15.653 "num_base_bdevs": 4, 00:12:15.653 "num_base_bdevs_discovered": 2, 00:12:15.653 "num_base_bdevs_operational": 4, 00:12:15.653 "base_bdevs_list": [ 00:12:15.653 { 00:12:15.653 "name": "BaseBdev1", 00:12:15.653 "uuid": "64963894-7b67-4993-8112-74f03d015d57", 00:12:15.653 "is_configured": true, 00:12:15.653 "data_offset": 2048, 00:12:15.653 "data_size": 63488 00:12:15.653 }, 00:12:15.653 { 00:12:15.653 "name": null, 00:12:15.653 "uuid": "b60bf325-e2d2-47ef-8dc0-64bae0331fea", 00:12:15.653 "is_configured": false, 00:12:15.653 "data_offset": 0, 00:12:15.653 "data_size": 63488 00:12:15.653 }, 00:12:15.653 { 00:12:15.653 "name": null, 00:12:15.653 "uuid": "775bdf5c-f114-4273-8354-eb692e96cbb5", 00:12:15.653 "is_configured": false, 00:12:15.653 "data_offset": 0, 00:12:15.653 "data_size": 63488 00:12:15.653 }, 00:12:15.653 { 00:12:15.653 "name": "BaseBdev4", 00:12:15.653 "uuid": "e601eb43-1f80-46a3-86e8-fc22518d29bd", 00:12:15.653 "is_configured": true, 00:12:15.653 "data_offset": 2048, 00:12:15.653 "data_size": 63488 00:12:15.653 } 00:12:15.653 ] 00:12:15.653 }' 00:12:15.653 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.653 21:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.221 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.221 21:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.221 21:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.221 21:39:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:16.221 21:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.221 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:16.221 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:16.221 21:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.221 21:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.221 [2024-12-10 21:39:16.823224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:16.221 21:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.221 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:16.221 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.221 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.221 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:16.221 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:16.221 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:16.221 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.221 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.221 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:16.221 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.221 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.222 21:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.222 21:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.222 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.222 21:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.222 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.222 "name": "Existed_Raid", 00:12:16.222 "uuid": "bee82d8e-fc9d-4acd-b8ef-a4de26d64d44", 00:12:16.222 "strip_size_kb": 64, 00:12:16.222 "state": "configuring", 00:12:16.222 "raid_level": "raid0", 00:12:16.222 "superblock": true, 00:12:16.222 "num_base_bdevs": 4, 00:12:16.222 "num_base_bdevs_discovered": 3, 00:12:16.222 "num_base_bdevs_operational": 4, 00:12:16.222 "base_bdevs_list": [ 00:12:16.222 { 00:12:16.222 "name": "BaseBdev1", 00:12:16.222 "uuid": "64963894-7b67-4993-8112-74f03d015d57", 00:12:16.222 "is_configured": true, 00:12:16.222 "data_offset": 2048, 00:12:16.222 "data_size": 63488 00:12:16.222 }, 00:12:16.222 { 00:12:16.222 "name": null, 00:12:16.222 "uuid": "b60bf325-e2d2-47ef-8dc0-64bae0331fea", 00:12:16.222 "is_configured": false, 00:12:16.222 "data_offset": 0, 00:12:16.222 "data_size": 63488 00:12:16.222 }, 00:12:16.222 { 00:12:16.222 "name": "BaseBdev3", 00:12:16.222 "uuid": "775bdf5c-f114-4273-8354-eb692e96cbb5", 00:12:16.222 "is_configured": true, 00:12:16.222 "data_offset": 2048, 00:12:16.222 "data_size": 63488 00:12:16.222 }, 00:12:16.222 { 00:12:16.222 "name": "BaseBdev4", 00:12:16.222 "uuid": 
"e601eb43-1f80-46a3-86e8-fc22518d29bd", 00:12:16.222 "is_configured": true, 00:12:16.222 "data_offset": 2048, 00:12:16.222 "data_size": 63488 00:12:16.222 } 00:12:16.222 ] 00:12:16.222 }' 00:12:16.222 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.222 21:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.791 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:16.791 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.791 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.791 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.791 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.791 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:16.791 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:16.791 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.791 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.791 [2024-12-10 21:39:17.346387] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:16.791 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.791 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:16.791 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.791 21:39:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.791 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:16.791 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:16.791 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:16.791 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.791 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.791 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.791 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.791 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.791 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.791 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.791 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.791 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.791 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.791 "name": "Existed_Raid", 00:12:16.791 "uuid": "bee82d8e-fc9d-4acd-b8ef-a4de26d64d44", 00:12:16.791 "strip_size_kb": 64, 00:12:16.791 "state": "configuring", 00:12:16.791 "raid_level": "raid0", 00:12:16.791 "superblock": true, 00:12:16.791 "num_base_bdevs": 4, 00:12:16.791 "num_base_bdevs_discovered": 2, 00:12:16.791 "num_base_bdevs_operational": 4, 00:12:16.791 "base_bdevs_list": [ 00:12:16.791 { 00:12:16.791 "name": null, 00:12:16.791 
"uuid": "64963894-7b67-4993-8112-74f03d015d57", 00:12:16.791 "is_configured": false, 00:12:16.791 "data_offset": 0, 00:12:16.791 "data_size": 63488 00:12:16.791 }, 00:12:16.791 { 00:12:16.791 "name": null, 00:12:16.791 "uuid": "b60bf325-e2d2-47ef-8dc0-64bae0331fea", 00:12:16.791 "is_configured": false, 00:12:16.791 "data_offset": 0, 00:12:16.791 "data_size": 63488 00:12:16.791 }, 00:12:16.791 { 00:12:16.791 "name": "BaseBdev3", 00:12:16.791 "uuid": "775bdf5c-f114-4273-8354-eb692e96cbb5", 00:12:16.791 "is_configured": true, 00:12:16.791 "data_offset": 2048, 00:12:16.791 "data_size": 63488 00:12:16.791 }, 00:12:16.791 { 00:12:16.791 "name": "BaseBdev4", 00:12:16.791 "uuid": "e601eb43-1f80-46a3-86e8-fc22518d29bd", 00:12:16.791 "is_configured": true, 00:12:16.791 "data_offset": 2048, 00:12:16.791 "data_size": 63488 00:12:16.791 } 00:12:16.791 ] 00:12:16.791 }' 00:12:16.791 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.791 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.361 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.361 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:17.361 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.361 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.361 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.361 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:17.361 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:17.361 21:39:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.361 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.361 [2024-12-10 21:39:17.942382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:17.361 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.361 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:17.361 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.361 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.361 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:17.361 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:17.361 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:17.361 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.361 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.361 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.361 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.361 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.361 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.361 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.361 21:39:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.361 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.361 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.361 "name": "Existed_Raid", 00:12:17.361 "uuid": "bee82d8e-fc9d-4acd-b8ef-a4de26d64d44", 00:12:17.361 "strip_size_kb": 64, 00:12:17.361 "state": "configuring", 00:12:17.361 "raid_level": "raid0", 00:12:17.361 "superblock": true, 00:12:17.361 "num_base_bdevs": 4, 00:12:17.361 "num_base_bdevs_discovered": 3, 00:12:17.361 "num_base_bdevs_operational": 4, 00:12:17.361 "base_bdevs_list": [ 00:12:17.361 { 00:12:17.361 "name": null, 00:12:17.361 "uuid": "64963894-7b67-4993-8112-74f03d015d57", 00:12:17.361 "is_configured": false, 00:12:17.361 "data_offset": 0, 00:12:17.361 "data_size": 63488 00:12:17.361 }, 00:12:17.361 { 00:12:17.361 "name": "BaseBdev2", 00:12:17.361 "uuid": "b60bf325-e2d2-47ef-8dc0-64bae0331fea", 00:12:17.361 "is_configured": true, 00:12:17.361 "data_offset": 2048, 00:12:17.361 "data_size": 63488 00:12:17.361 }, 00:12:17.361 { 00:12:17.361 "name": "BaseBdev3", 00:12:17.361 "uuid": "775bdf5c-f114-4273-8354-eb692e96cbb5", 00:12:17.361 "is_configured": true, 00:12:17.361 "data_offset": 2048, 00:12:17.361 "data_size": 63488 00:12:17.361 }, 00:12:17.361 { 00:12:17.361 "name": "BaseBdev4", 00:12:17.361 "uuid": "e601eb43-1f80-46a3-86e8-fc22518d29bd", 00:12:17.361 "is_configured": true, 00:12:17.361 "data_offset": 2048, 00:12:17.361 "data_size": 63488 00:12:17.361 } 00:12:17.361 ] 00:12:17.361 }' 00:12:17.361 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.361 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.935 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.935 21:39:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.935 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.935 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:17.935 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.935 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:17.935 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:17.935 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.935 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.935 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.935 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.935 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 64963894-7b67-4993-8112-74f03d015d57 00:12:17.935 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.935 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.935 [2024-12-10 21:39:18.564041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:17.935 [2024-12-10 21:39:18.564471] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:17.935 [2024-12-10 21:39:18.564527] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:17.935 [2024-12-10 21:39:18.564847] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:17.935 [2024-12-10 21:39:18.565051] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:17.935 [2024-12-10 21:39:18.565098] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:17.935 NewBaseBdev 00:12:17.935 [2024-12-10 21:39:18.565290] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:17.935 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.935 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:17.935 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:17.935 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:17.935 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:17.935 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:17.935 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:17.935 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:17.935 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.935 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.935 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.935 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:17.935 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.935 21:39:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.935 [ 00:12:17.935 { 00:12:17.935 "name": "NewBaseBdev", 00:12:17.935 "aliases": [ 00:12:17.935 "64963894-7b67-4993-8112-74f03d015d57" 00:12:17.935 ], 00:12:17.935 "product_name": "Malloc disk", 00:12:17.935 "block_size": 512, 00:12:17.935 "num_blocks": 65536, 00:12:17.935 "uuid": "64963894-7b67-4993-8112-74f03d015d57", 00:12:17.935 "assigned_rate_limits": { 00:12:17.935 "rw_ios_per_sec": 0, 00:12:17.935 "rw_mbytes_per_sec": 0, 00:12:17.935 "r_mbytes_per_sec": 0, 00:12:17.935 "w_mbytes_per_sec": 0 00:12:17.935 }, 00:12:17.935 "claimed": true, 00:12:17.935 "claim_type": "exclusive_write", 00:12:17.935 "zoned": false, 00:12:17.935 "supported_io_types": { 00:12:17.935 "read": true, 00:12:17.935 "write": true, 00:12:17.935 "unmap": true, 00:12:17.935 "flush": true, 00:12:17.935 "reset": true, 00:12:17.935 "nvme_admin": false, 00:12:17.935 "nvme_io": false, 00:12:17.935 "nvme_io_md": false, 00:12:17.935 "write_zeroes": true, 00:12:17.935 "zcopy": true, 00:12:17.935 "get_zone_info": false, 00:12:17.935 "zone_management": false, 00:12:17.935 "zone_append": false, 00:12:17.935 "compare": false, 00:12:17.935 "compare_and_write": false, 00:12:17.935 "abort": true, 00:12:17.935 "seek_hole": false, 00:12:17.935 "seek_data": false, 00:12:17.935 "copy": true, 00:12:17.935 "nvme_iov_md": false 00:12:17.936 }, 00:12:17.936 "memory_domains": [ 00:12:17.936 { 00:12:17.936 "dma_device_id": "system", 00:12:17.936 "dma_device_type": 1 00:12:17.936 }, 00:12:17.936 { 00:12:17.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.936 "dma_device_type": 2 00:12:17.936 } 00:12:17.936 ], 00:12:17.936 "driver_specific": {} 00:12:17.936 } 00:12:17.936 ] 00:12:17.936 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.936 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:17.936 21:39:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:17.936 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.936 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.936 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:17.936 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:17.936 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:17.936 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.936 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.936 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.936 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.936 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.936 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.936 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.936 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.936 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.936 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.936 "name": "Existed_Raid", 00:12:17.936 "uuid": "bee82d8e-fc9d-4acd-b8ef-a4de26d64d44", 00:12:17.936 "strip_size_kb": 64, 00:12:17.936 
"state": "online", 00:12:17.936 "raid_level": "raid0", 00:12:17.936 "superblock": true, 00:12:17.936 "num_base_bdevs": 4, 00:12:17.936 "num_base_bdevs_discovered": 4, 00:12:17.936 "num_base_bdevs_operational": 4, 00:12:17.936 "base_bdevs_list": [ 00:12:17.936 { 00:12:17.936 "name": "NewBaseBdev", 00:12:17.936 "uuid": "64963894-7b67-4993-8112-74f03d015d57", 00:12:17.936 "is_configured": true, 00:12:17.936 "data_offset": 2048, 00:12:17.936 "data_size": 63488 00:12:17.936 }, 00:12:17.936 { 00:12:17.936 "name": "BaseBdev2", 00:12:17.936 "uuid": "b60bf325-e2d2-47ef-8dc0-64bae0331fea", 00:12:17.936 "is_configured": true, 00:12:17.936 "data_offset": 2048, 00:12:17.936 "data_size": 63488 00:12:17.936 }, 00:12:17.936 { 00:12:17.936 "name": "BaseBdev3", 00:12:17.936 "uuid": "775bdf5c-f114-4273-8354-eb692e96cbb5", 00:12:17.936 "is_configured": true, 00:12:17.936 "data_offset": 2048, 00:12:17.936 "data_size": 63488 00:12:17.936 }, 00:12:17.936 { 00:12:17.936 "name": "BaseBdev4", 00:12:17.936 "uuid": "e601eb43-1f80-46a3-86e8-fc22518d29bd", 00:12:17.936 "is_configured": true, 00:12:17.936 "data_offset": 2048, 00:12:17.936 "data_size": 63488 00:12:17.936 } 00:12:17.936 ] 00:12:17.936 }' 00:12:17.936 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.936 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.505 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:18.505 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:18.505 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:18.505 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:18.505 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:18.505 
21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:18.505 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:18.505 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.505 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.505 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:18.505 [2024-12-10 21:39:19.087722] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:18.505 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.505 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:18.505 "name": "Existed_Raid", 00:12:18.505 "aliases": [ 00:12:18.505 "bee82d8e-fc9d-4acd-b8ef-a4de26d64d44" 00:12:18.505 ], 00:12:18.505 "product_name": "Raid Volume", 00:12:18.505 "block_size": 512, 00:12:18.505 "num_blocks": 253952, 00:12:18.505 "uuid": "bee82d8e-fc9d-4acd-b8ef-a4de26d64d44", 00:12:18.505 "assigned_rate_limits": { 00:12:18.505 "rw_ios_per_sec": 0, 00:12:18.505 "rw_mbytes_per_sec": 0, 00:12:18.505 "r_mbytes_per_sec": 0, 00:12:18.505 "w_mbytes_per_sec": 0 00:12:18.505 }, 00:12:18.505 "claimed": false, 00:12:18.505 "zoned": false, 00:12:18.505 "supported_io_types": { 00:12:18.505 "read": true, 00:12:18.505 "write": true, 00:12:18.505 "unmap": true, 00:12:18.505 "flush": true, 00:12:18.505 "reset": true, 00:12:18.505 "nvme_admin": false, 00:12:18.505 "nvme_io": false, 00:12:18.505 "nvme_io_md": false, 00:12:18.505 "write_zeroes": true, 00:12:18.505 "zcopy": false, 00:12:18.505 "get_zone_info": false, 00:12:18.505 "zone_management": false, 00:12:18.505 "zone_append": false, 00:12:18.505 "compare": false, 00:12:18.505 "compare_and_write": false, 00:12:18.505 "abort": 
false, 00:12:18.505 "seek_hole": false, 00:12:18.505 "seek_data": false, 00:12:18.505 "copy": false, 00:12:18.505 "nvme_iov_md": false 00:12:18.505 }, 00:12:18.505 "memory_domains": [ 00:12:18.505 { 00:12:18.505 "dma_device_id": "system", 00:12:18.505 "dma_device_type": 1 00:12:18.505 }, 00:12:18.505 { 00:12:18.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.505 "dma_device_type": 2 00:12:18.505 }, 00:12:18.505 { 00:12:18.505 "dma_device_id": "system", 00:12:18.505 "dma_device_type": 1 00:12:18.505 }, 00:12:18.505 { 00:12:18.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.505 "dma_device_type": 2 00:12:18.505 }, 00:12:18.505 { 00:12:18.505 "dma_device_id": "system", 00:12:18.505 "dma_device_type": 1 00:12:18.505 }, 00:12:18.505 { 00:12:18.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.505 "dma_device_type": 2 00:12:18.505 }, 00:12:18.505 { 00:12:18.505 "dma_device_id": "system", 00:12:18.505 "dma_device_type": 1 00:12:18.505 }, 00:12:18.505 { 00:12:18.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.505 "dma_device_type": 2 00:12:18.505 } 00:12:18.505 ], 00:12:18.505 "driver_specific": { 00:12:18.505 "raid": { 00:12:18.505 "uuid": "bee82d8e-fc9d-4acd-b8ef-a4de26d64d44", 00:12:18.505 "strip_size_kb": 64, 00:12:18.505 "state": "online", 00:12:18.505 "raid_level": "raid0", 00:12:18.505 "superblock": true, 00:12:18.505 "num_base_bdevs": 4, 00:12:18.505 "num_base_bdevs_discovered": 4, 00:12:18.505 "num_base_bdevs_operational": 4, 00:12:18.505 "base_bdevs_list": [ 00:12:18.505 { 00:12:18.505 "name": "NewBaseBdev", 00:12:18.505 "uuid": "64963894-7b67-4993-8112-74f03d015d57", 00:12:18.505 "is_configured": true, 00:12:18.505 "data_offset": 2048, 00:12:18.505 "data_size": 63488 00:12:18.505 }, 00:12:18.505 { 00:12:18.505 "name": "BaseBdev2", 00:12:18.505 "uuid": "b60bf325-e2d2-47ef-8dc0-64bae0331fea", 00:12:18.505 "is_configured": true, 00:12:18.505 "data_offset": 2048, 00:12:18.505 "data_size": 63488 00:12:18.505 }, 00:12:18.505 { 00:12:18.505 
"name": "BaseBdev3", 00:12:18.505 "uuid": "775bdf5c-f114-4273-8354-eb692e96cbb5", 00:12:18.505 "is_configured": true, 00:12:18.505 "data_offset": 2048, 00:12:18.505 "data_size": 63488 00:12:18.505 }, 00:12:18.505 { 00:12:18.505 "name": "BaseBdev4", 00:12:18.505 "uuid": "e601eb43-1f80-46a3-86e8-fc22518d29bd", 00:12:18.505 "is_configured": true, 00:12:18.505 "data_offset": 2048, 00:12:18.505 "data_size": 63488 00:12:18.505 } 00:12:18.505 ] 00:12:18.505 } 00:12:18.505 } 00:12:18.505 }' 00:12:18.505 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:18.505 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:18.505 BaseBdev2 00:12:18.505 BaseBdev3 00:12:18.505 BaseBdev4' 00:12:18.505 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.505 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:18.505 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:18.505 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:18.505 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.505 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.505 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.505 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.505 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:18.505 21:39:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:18.505 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:18.505 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:18.505 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.505 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.505 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.505 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.765 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:18.765 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:18.765 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:18.765 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.765 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:18.765 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.765 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.765 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.765 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:18.765 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:12:18.765 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:18.765 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:18.765 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.765 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.765 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.765 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.765 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:18.765 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:18.765 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:18.765 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.765 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.765 [2024-12-10 21:39:19.414733] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:18.765 [2024-12-10 21:39:19.414768] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:18.765 [2024-12-10 21:39:19.414857] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:18.766 [2024-12-10 21:39:19.414932] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:18.766 [2024-12-10 21:39:19.414942] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:12:18.766 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.766 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70179 00:12:18.766 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70179 ']' 00:12:18.766 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70179 00:12:18.766 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:18.766 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:18.766 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70179 00:12:18.766 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:18.766 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:18.766 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70179' 00:12:18.766 killing process with pid 70179 00:12:18.766 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70179 00:12:18.766 [2024-12-10 21:39:19.453822] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:18.766 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70179 00:12:19.338 [2024-12-10 21:39:19.912568] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:20.721 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:20.721 00:12:20.721 real 0m12.478s 00:12:20.721 user 0m19.794s 00:12:20.721 sys 0m2.098s 00:12:20.721 21:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.721 
************************************ 00:12:20.721 END TEST raid_state_function_test_sb 00:12:20.721 ************************************ 00:12:20.721 21:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.721 21:39:21 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:12:20.721 21:39:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:20.721 21:39:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.721 21:39:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:20.721 ************************************ 00:12:20.721 START TEST raid_superblock_test 00:12:20.721 ************************************ 00:12:20.721 21:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:12:20.721 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:12:20.721 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:20.721 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:20.721 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:20.721 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:20.721 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:20.721 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:20.721 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:20.721 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:20.721 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:20.721 21:39:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:20.721 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:20.721 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:20.721 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:12:20.721 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:20.721 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:20.721 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70855 00:12:20.721 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:20.721 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70855 00:12:20.721 21:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70855 ']' 00:12:20.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.721 21:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.721 21:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:20.721 21:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.721 21:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:20.721 21:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.721 [2024-12-10 21:39:21.361641] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:12:20.721 [2024-12-10 21:39:21.361863] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70855 ] 00:12:20.980 [2024-12-10 21:39:21.537492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.980 [2024-12-10 21:39:21.666178] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.238 [2024-12-10 21:39:21.897406] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:21.238 [2024-12-10 21:39:21.897576] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:21.497 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:21.497 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:21.497 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:21.497 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:21.497 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:21.497 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:21.497 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:21.497 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:21.497 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:21.497 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:21.497 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:21.497 
21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.497 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.756 malloc1 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.756 [2024-12-10 21:39:22.302293] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:21.756 [2024-12-10 21:39:22.302453] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.756 [2024-12-10 21:39:22.302505] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:21.756 [2024-12-10 21:39:22.302547] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.756 [2024-12-10 21:39:22.305014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.756 [2024-12-10 21:39:22.305100] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:21.756 pt1 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.756 malloc2 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.756 [2024-12-10 21:39:22.364906] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:21.756 [2024-12-10 21:39:22.364989] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.756 [2024-12-10 21:39:22.365021] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:21.756 [2024-12-10 21:39:22.365032] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.756 [2024-12-10 21:39:22.367573] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.756 [2024-12-10 21:39:22.367643] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:21.756 
pt2 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.756 malloc3 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.756 [2024-12-10 21:39:22.440500] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:21.756 [2024-12-10 21:39:22.440638] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.756 [2024-12-10 21:39:22.440697] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:21.756 [2024-12-10 21:39:22.440732] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.756 [2024-12-10 21:39:22.443091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.756 [2024-12-10 21:39:22.443170] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:21.756 pt3 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.756 malloc4 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.756 [2024-12-10 21:39:22.508114] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:21.756 [2024-12-10 21:39:22.508265] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.756 [2024-12-10 21:39:22.508342] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:21.756 [2024-12-10 21:39:22.508380] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.756 [2024-12-10 21:39:22.510818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.756 [2024-12-10 21:39:22.510895] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:21.756 pt4 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.756 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.756 [2024-12-10 21:39:22.520099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:21.757 [2024-12-10 
21:39:22.522106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:21.757 [2024-12-10 21:39:22.522244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:21.757 [2024-12-10 21:39:22.522338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:21.757 [2024-12-10 21:39:22.522573] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:21.757 [2024-12-10 21:39:22.522623] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:21.757 [2024-12-10 21:39:22.522930] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:21.757 [2024-12-10 21:39:22.523152] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:21.757 [2024-12-10 21:39:22.523202] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:21.757 [2024-12-10 21:39:22.523413] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.757 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.757 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:21.757 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:21.757 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.757 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:21.757 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:21.757 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.757 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:21.757 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.757 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.757 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.757 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.757 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.757 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.757 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.015 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.015 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.015 "name": "raid_bdev1", 00:12:22.015 "uuid": "c840e7d2-2572-45e4-ba35-c4f857c812a2", 00:12:22.015 "strip_size_kb": 64, 00:12:22.015 "state": "online", 00:12:22.015 "raid_level": "raid0", 00:12:22.015 "superblock": true, 00:12:22.015 "num_base_bdevs": 4, 00:12:22.015 "num_base_bdevs_discovered": 4, 00:12:22.015 "num_base_bdevs_operational": 4, 00:12:22.015 "base_bdevs_list": [ 00:12:22.015 { 00:12:22.015 "name": "pt1", 00:12:22.015 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:22.015 "is_configured": true, 00:12:22.015 "data_offset": 2048, 00:12:22.015 "data_size": 63488 00:12:22.015 }, 00:12:22.015 { 00:12:22.015 "name": "pt2", 00:12:22.015 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:22.015 "is_configured": true, 00:12:22.015 "data_offset": 2048, 00:12:22.015 "data_size": 63488 00:12:22.015 }, 00:12:22.015 { 00:12:22.015 "name": "pt3", 00:12:22.015 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:22.015 "is_configured": true, 00:12:22.015 "data_offset": 2048, 00:12:22.015 
"data_size": 63488 00:12:22.015 }, 00:12:22.015 { 00:12:22.015 "name": "pt4", 00:12:22.015 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:22.015 "is_configured": true, 00:12:22.015 "data_offset": 2048, 00:12:22.015 "data_size": 63488 00:12:22.015 } 00:12:22.015 ] 00:12:22.015 }' 00:12:22.015 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.015 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.298 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:22.298 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:22.298 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:22.298 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:22.298 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:22.298 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:22.298 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:22.298 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.298 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.298 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:22.298 [2024-12-10 21:39:22.979751] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:22.298 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.298 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:22.298 "name": "raid_bdev1", 00:12:22.298 "aliases": [ 00:12:22.298 "c840e7d2-2572-45e4-ba35-c4f857c812a2" 
00:12:22.298 ], 00:12:22.298 "product_name": "Raid Volume", 00:12:22.298 "block_size": 512, 00:12:22.298 "num_blocks": 253952, 00:12:22.298 "uuid": "c840e7d2-2572-45e4-ba35-c4f857c812a2", 00:12:22.298 "assigned_rate_limits": { 00:12:22.298 "rw_ios_per_sec": 0, 00:12:22.298 "rw_mbytes_per_sec": 0, 00:12:22.298 "r_mbytes_per_sec": 0, 00:12:22.298 "w_mbytes_per_sec": 0 00:12:22.298 }, 00:12:22.298 "claimed": false, 00:12:22.298 "zoned": false, 00:12:22.298 "supported_io_types": { 00:12:22.298 "read": true, 00:12:22.298 "write": true, 00:12:22.298 "unmap": true, 00:12:22.298 "flush": true, 00:12:22.298 "reset": true, 00:12:22.298 "nvme_admin": false, 00:12:22.298 "nvme_io": false, 00:12:22.298 "nvme_io_md": false, 00:12:22.298 "write_zeroes": true, 00:12:22.298 "zcopy": false, 00:12:22.298 "get_zone_info": false, 00:12:22.298 "zone_management": false, 00:12:22.298 "zone_append": false, 00:12:22.298 "compare": false, 00:12:22.298 "compare_and_write": false, 00:12:22.298 "abort": false, 00:12:22.298 "seek_hole": false, 00:12:22.298 "seek_data": false, 00:12:22.298 "copy": false, 00:12:22.298 "nvme_iov_md": false 00:12:22.298 }, 00:12:22.298 "memory_domains": [ 00:12:22.298 { 00:12:22.298 "dma_device_id": "system", 00:12:22.298 "dma_device_type": 1 00:12:22.298 }, 00:12:22.298 { 00:12:22.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.298 "dma_device_type": 2 00:12:22.298 }, 00:12:22.298 { 00:12:22.298 "dma_device_id": "system", 00:12:22.298 "dma_device_type": 1 00:12:22.298 }, 00:12:22.298 { 00:12:22.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.298 "dma_device_type": 2 00:12:22.298 }, 00:12:22.298 { 00:12:22.298 "dma_device_id": "system", 00:12:22.298 "dma_device_type": 1 00:12:22.298 }, 00:12:22.298 { 00:12:22.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.298 "dma_device_type": 2 00:12:22.298 }, 00:12:22.298 { 00:12:22.298 "dma_device_id": "system", 00:12:22.298 "dma_device_type": 1 00:12:22.298 }, 00:12:22.298 { 00:12:22.298 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:22.299 "dma_device_type": 2 00:12:22.299 } 00:12:22.299 ], 00:12:22.299 "driver_specific": { 00:12:22.299 "raid": { 00:12:22.299 "uuid": "c840e7d2-2572-45e4-ba35-c4f857c812a2", 00:12:22.299 "strip_size_kb": 64, 00:12:22.299 "state": "online", 00:12:22.299 "raid_level": "raid0", 00:12:22.299 "superblock": true, 00:12:22.299 "num_base_bdevs": 4, 00:12:22.299 "num_base_bdevs_discovered": 4, 00:12:22.299 "num_base_bdevs_operational": 4, 00:12:22.299 "base_bdevs_list": [ 00:12:22.299 { 00:12:22.299 "name": "pt1", 00:12:22.299 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:22.299 "is_configured": true, 00:12:22.299 "data_offset": 2048, 00:12:22.299 "data_size": 63488 00:12:22.299 }, 00:12:22.299 { 00:12:22.299 "name": "pt2", 00:12:22.299 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:22.299 "is_configured": true, 00:12:22.299 "data_offset": 2048, 00:12:22.299 "data_size": 63488 00:12:22.299 }, 00:12:22.299 { 00:12:22.299 "name": "pt3", 00:12:22.299 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:22.299 "is_configured": true, 00:12:22.299 "data_offset": 2048, 00:12:22.299 "data_size": 63488 00:12:22.299 }, 00:12:22.299 { 00:12:22.299 "name": "pt4", 00:12:22.299 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:22.299 "is_configured": true, 00:12:22.299 "data_offset": 2048, 00:12:22.299 "data_size": 63488 00:12:22.299 } 00:12:22.299 ] 00:12:22.299 } 00:12:22.299 } 00:12:22.299 }' 00:12:22.299 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:22.299 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:22.299 pt2 00:12:22.299 pt3 00:12:22.299 pt4' 00:12:22.299 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.570 21:39:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
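The loop above (bdev_raid.sh@187-193) renders selected fields of each bdev to a flat string with `jq -r '[...] | join(" ")'` and then compares the RAID volume's string against each base bdev's string with a bash `[[ ... ]]` match; null fields become empty strings, which is why the captured value is `'512 '` with trailing spaces. The following is a minimal standalone sketch of that comparison; the inline JSON samples are illustrative stand-ins, not real `rpc_cmd bdev_get_bdevs` output.

```shell
#!/usr/bin/env bash
# Sketch of the bdev_raid.sh@187-193 comparison: project the same four fields
# from the raid bdev and from a base bdev, then require an exact string match.
# Sample JSON below is hypothetical; a real run feeds rpc_cmd output to jq.
raid_json='{"block_size": 512, "md_size": null, "md_interleave": null, "dif_type": null}'
base_json='[{"block_size": 512, "md_size": null, "md_interleave": null, "dif_type": null}]'

# jq's join() renders null elements as empty strings, so the result is
# "512" followed by three separator spaces: '512   '.
cmp_raid_bdev=$(jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' <<< "$raid_json")
cmp_base_bdev=$(jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' <<< "$base_json")

if [[ "$cmp_raid_bdev" == "$cmp_base_bdev" ]]; then
  echo "match"
else
  echo "mismatch" >&2
  exit 1
fi
```

The `[[ 512 == \5\1\2\ \ \ ]]` form seen in the xtrace is the same check after bash prints the pattern side with every character escaped, including the three trailing spaces contributed by the null fields.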
00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:22.570 [2024-12-10 21:39:23.295344] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c840e7d2-2572-45e4-ba35-c4f857c812a2 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c840e7d2-2572-45e4-ba35-c4f857c812a2 ']' 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.570 [2024-12-10 21:39:23.342747] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:22.570 [2024-12-10 21:39:23.342780] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:22.570 [2024-12-10 21:39:23.342876] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:22.570 [2024-12-10 21:39:23.342950] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:22.570 [2024-12-10 21:39:23.342966] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:22.570 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.829 [2024-12-10 21:39:23.506579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:22.829 [2024-12-10 21:39:23.508682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:22.829 [2024-12-10 21:39:23.508790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:22.829 [2024-12-10 21:39:23.508851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:22.829 [2024-12-10 21:39:23.508941] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:22.829 [2024-12-10 21:39:23.509063] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:22.829 [2024-12-10 21:39:23.509126] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:22.829 [2024-12-10 21:39:23.509198] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:22.829 [2024-12-10 21:39:23.509252] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:22.829 [2024-12-10 21:39:23.509293] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:12:22.829 request: 00:12:22.829 { 00:12:22.829 "name": "raid_bdev1", 00:12:22.829 "raid_level": "raid0", 00:12:22.829 "base_bdevs": [ 00:12:22.829 "malloc1", 00:12:22.829 "malloc2", 00:12:22.829 "malloc3", 00:12:22.829 "malloc4" 00:12:22.829 ], 00:12:22.829 "strip_size_kb": 64, 00:12:22.829 "superblock": false, 00:12:22.829 "method": "bdev_raid_create", 00:12:22.829 "req_id": 1 00:12:22.829 } 00:12:22.829 Got JSON-RPC error response 00:12:22.829 response: 00:12:22.829 { 00:12:22.829 "code": -17, 00:12:22.829 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:22.829 } 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.829 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.830 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.830 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:22.830 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.830 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:22.830 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:22.830 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:12:22.830 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.830 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.830 [2024-12-10 21:39:23.574395] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:22.830 [2024-12-10 21:39:23.574482] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.830 [2024-12-10 21:39:23.574502] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:22.830 [2024-12-10 21:39:23.574514] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.830 [2024-12-10 21:39:23.576924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.830 [2024-12-10 21:39:23.576969] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:22.830 [2024-12-10 21:39:23.577063] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:22.830 [2024-12-10 21:39:23.577131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:22.830 pt1 00:12:22.830 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.830 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:12:22.830 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:22.830 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.830 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:22.830 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:22.830 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:22.830 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.830 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.830 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.830 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.830 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.830 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.830 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.830 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.830 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.088 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.088 "name": "raid_bdev1", 00:12:23.088 "uuid": "c840e7d2-2572-45e4-ba35-c4f857c812a2", 00:12:23.088 "strip_size_kb": 64, 00:12:23.088 "state": "configuring", 00:12:23.088 "raid_level": "raid0", 00:12:23.088 "superblock": true, 00:12:23.088 "num_base_bdevs": 4, 00:12:23.088 "num_base_bdevs_discovered": 1, 00:12:23.088 "num_base_bdevs_operational": 4, 00:12:23.088 "base_bdevs_list": [ 00:12:23.088 { 00:12:23.088 "name": "pt1", 00:12:23.088 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:23.088 "is_configured": true, 00:12:23.088 "data_offset": 2048, 00:12:23.088 "data_size": 63488 00:12:23.088 }, 00:12:23.088 { 00:12:23.088 "name": null, 00:12:23.088 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:23.088 "is_configured": false, 00:12:23.088 "data_offset": 2048, 00:12:23.088 "data_size": 63488 00:12:23.088 }, 00:12:23.088 { 00:12:23.088 "name": null, 00:12:23.088 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:12:23.088 "is_configured": false, 00:12:23.088 "data_offset": 2048, 00:12:23.088 "data_size": 63488 00:12:23.088 }, 00:12:23.088 { 00:12:23.088 "name": null, 00:12:23.088 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:23.088 "is_configured": false, 00:12:23.088 "data_offset": 2048, 00:12:23.088 "data_size": 63488 00:12:23.088 } 00:12:23.088 ] 00:12:23.088 }' 00:12:23.088 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.088 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.346 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:23.346 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:23.346 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.346 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.346 [2024-12-10 21:39:24.033621] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:23.346 [2024-12-10 21:39:24.033747] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.346 [2024-12-10 21:39:24.033786] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:23.346 [2024-12-10 21:39:24.033816] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.346 [2024-12-10 21:39:24.034313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.346 [2024-12-10 21:39:24.034377] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:23.346 [2024-12-10 21:39:24.034498] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:23.346 [2024-12-10 21:39:24.034554] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:23.346 pt2 00:12:23.346 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.346 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:23.346 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.346 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.346 [2024-12-10 21:39:24.045603] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:23.346 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.346 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:12:23.346 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.346 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.346 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:23.346 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:23.346 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:23.346 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.347 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.347 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.347 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.347 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.347 21:39:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.347 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.347 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.347 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.347 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.347 "name": "raid_bdev1", 00:12:23.347 "uuid": "c840e7d2-2572-45e4-ba35-c4f857c812a2", 00:12:23.347 "strip_size_kb": 64, 00:12:23.347 "state": "configuring", 00:12:23.347 "raid_level": "raid0", 00:12:23.347 "superblock": true, 00:12:23.347 "num_base_bdevs": 4, 00:12:23.347 "num_base_bdevs_discovered": 1, 00:12:23.347 "num_base_bdevs_operational": 4, 00:12:23.347 "base_bdevs_list": [ 00:12:23.347 { 00:12:23.347 "name": "pt1", 00:12:23.347 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:23.347 "is_configured": true, 00:12:23.347 "data_offset": 2048, 00:12:23.347 "data_size": 63488 00:12:23.347 }, 00:12:23.347 { 00:12:23.347 "name": null, 00:12:23.347 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:23.347 "is_configured": false, 00:12:23.347 "data_offset": 0, 00:12:23.347 "data_size": 63488 00:12:23.347 }, 00:12:23.347 { 00:12:23.347 "name": null, 00:12:23.347 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:23.347 "is_configured": false, 00:12:23.347 "data_offset": 2048, 00:12:23.347 "data_size": 63488 00:12:23.347 }, 00:12:23.347 { 00:12:23.347 "name": null, 00:12:23.347 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:23.347 "is_configured": false, 00:12:23.347 "data_offset": 2048, 00:12:23.347 "data_size": 63488 00:12:23.347 } 00:12:23.347 ] 00:12:23.347 }' 00:12:23.347 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.347 21:39:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:23.912 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:23.912 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:23.912 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:23.912 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.912 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.912 [2024-12-10 21:39:24.448924] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:23.912 [2024-12-10 21:39:24.449000] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.912 [2024-12-10 21:39:24.449022] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:23.912 [2024-12-10 21:39:24.449032] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.912 [2024-12-10 21:39:24.449553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.912 [2024-12-10 21:39:24.449572] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:23.912 [2024-12-10 21:39:24.449664] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:23.912 [2024-12-10 21:39:24.449693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:23.912 pt2 00:12:23.912 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.912 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:23.912 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:23.912 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:23.912 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.912 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.912 [2024-12-10 21:39:24.460872] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:23.912 [2024-12-10 21:39:24.460920] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.913 [2024-12-10 21:39:24.460938] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:23.913 [2024-12-10 21:39:24.460947] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.913 [2024-12-10 21:39:24.461313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.913 [2024-12-10 21:39:24.461329] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:23.913 [2024-12-10 21:39:24.461393] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:23.913 [2024-12-10 21:39:24.461430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:23.913 pt3 00:12:23.913 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.913 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:23.913 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:23.913 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:23.913 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.913 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.913 [2024-12-10 21:39:24.472822] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:23.913 [2024-12-10 21:39:24.472862] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.913 [2024-12-10 21:39:24.472878] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:23.913 [2024-12-10 21:39:24.472887] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.913 [2024-12-10 21:39:24.473262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.913 [2024-12-10 21:39:24.473283] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:23.913 [2024-12-10 21:39:24.473342] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:23.913 [2024-12-10 21:39:24.473361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:23.913 [2024-12-10 21:39:24.473503] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:23.913 [2024-12-10 21:39:24.473517] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:23.913 [2024-12-10 21:39:24.473746] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:23.913 [2024-12-10 21:39:24.473884] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:23.913 [2024-12-10 21:39:24.473896] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:23.913 [2024-12-10 21:39:24.474028] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.913 pt4 00:12:23.913 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.913 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:23.913 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:12:23.913 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:23.913 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.913 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:23.913 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:23.913 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:23.913 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:23.913 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.913 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.913 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.913 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.913 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.913 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.913 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.913 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.913 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.913 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.913 "name": "raid_bdev1", 00:12:23.913 "uuid": "c840e7d2-2572-45e4-ba35-c4f857c812a2", 00:12:23.913 "strip_size_kb": 64, 00:12:23.913 "state": "online", 00:12:23.913 "raid_level": "raid0", 00:12:23.913 
"superblock": true, 00:12:23.913 "num_base_bdevs": 4, 00:12:23.913 "num_base_bdevs_discovered": 4, 00:12:23.913 "num_base_bdevs_operational": 4, 00:12:23.913 "base_bdevs_list": [ 00:12:23.913 { 00:12:23.913 "name": "pt1", 00:12:23.913 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:23.913 "is_configured": true, 00:12:23.913 "data_offset": 2048, 00:12:23.913 "data_size": 63488 00:12:23.913 }, 00:12:23.913 { 00:12:23.913 "name": "pt2", 00:12:23.913 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:23.913 "is_configured": true, 00:12:23.913 "data_offset": 2048, 00:12:23.913 "data_size": 63488 00:12:23.913 }, 00:12:23.913 { 00:12:23.913 "name": "pt3", 00:12:23.913 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:23.913 "is_configured": true, 00:12:23.913 "data_offset": 2048, 00:12:23.913 "data_size": 63488 00:12:23.913 }, 00:12:23.913 { 00:12:23.913 "name": "pt4", 00:12:23.913 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:23.913 "is_configured": true, 00:12:23.913 "data_offset": 2048, 00:12:23.913 "data_size": 63488 00:12:23.913 } 00:12:23.913 ] 00:12:23.913 }' 00:12:23.913 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.913 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.171 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:24.171 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:24.171 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:24.171 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:24.171 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:24.171 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:24.171 21:39:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:24.171 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.171 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:24.171 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.171 [2024-12-10 21:39:24.936407] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:24.430 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.430 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:24.430 "name": "raid_bdev1", 00:12:24.430 "aliases": [ 00:12:24.430 "c840e7d2-2572-45e4-ba35-c4f857c812a2" 00:12:24.430 ], 00:12:24.430 "product_name": "Raid Volume", 00:12:24.430 "block_size": 512, 00:12:24.430 "num_blocks": 253952, 00:12:24.430 "uuid": "c840e7d2-2572-45e4-ba35-c4f857c812a2", 00:12:24.430 "assigned_rate_limits": { 00:12:24.430 "rw_ios_per_sec": 0, 00:12:24.430 "rw_mbytes_per_sec": 0, 00:12:24.430 "r_mbytes_per_sec": 0, 00:12:24.430 "w_mbytes_per_sec": 0 00:12:24.430 }, 00:12:24.430 "claimed": false, 00:12:24.430 "zoned": false, 00:12:24.430 "supported_io_types": { 00:12:24.430 "read": true, 00:12:24.430 "write": true, 00:12:24.430 "unmap": true, 00:12:24.430 "flush": true, 00:12:24.430 "reset": true, 00:12:24.430 "nvme_admin": false, 00:12:24.430 "nvme_io": false, 00:12:24.430 "nvme_io_md": false, 00:12:24.430 "write_zeroes": true, 00:12:24.430 "zcopy": false, 00:12:24.430 "get_zone_info": false, 00:12:24.430 "zone_management": false, 00:12:24.430 "zone_append": false, 00:12:24.430 "compare": false, 00:12:24.430 "compare_and_write": false, 00:12:24.430 "abort": false, 00:12:24.430 "seek_hole": false, 00:12:24.430 "seek_data": false, 00:12:24.430 "copy": false, 00:12:24.430 "nvme_iov_md": false 00:12:24.430 }, 00:12:24.430 
"memory_domains": [ 00:12:24.430 { 00:12:24.430 "dma_device_id": "system", 00:12:24.430 "dma_device_type": 1 00:12:24.430 }, 00:12:24.430 { 00:12:24.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.430 "dma_device_type": 2 00:12:24.430 }, 00:12:24.430 { 00:12:24.430 "dma_device_id": "system", 00:12:24.430 "dma_device_type": 1 00:12:24.430 }, 00:12:24.430 { 00:12:24.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.431 "dma_device_type": 2 00:12:24.431 }, 00:12:24.431 { 00:12:24.431 "dma_device_id": "system", 00:12:24.431 "dma_device_type": 1 00:12:24.431 }, 00:12:24.431 { 00:12:24.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.431 "dma_device_type": 2 00:12:24.431 }, 00:12:24.431 { 00:12:24.431 "dma_device_id": "system", 00:12:24.431 "dma_device_type": 1 00:12:24.431 }, 00:12:24.431 { 00:12:24.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.431 "dma_device_type": 2 00:12:24.431 } 00:12:24.431 ], 00:12:24.431 "driver_specific": { 00:12:24.431 "raid": { 00:12:24.431 "uuid": "c840e7d2-2572-45e4-ba35-c4f857c812a2", 00:12:24.431 "strip_size_kb": 64, 00:12:24.431 "state": "online", 00:12:24.431 "raid_level": "raid0", 00:12:24.431 "superblock": true, 00:12:24.431 "num_base_bdevs": 4, 00:12:24.431 "num_base_bdevs_discovered": 4, 00:12:24.431 "num_base_bdevs_operational": 4, 00:12:24.431 "base_bdevs_list": [ 00:12:24.431 { 00:12:24.431 "name": "pt1", 00:12:24.431 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:24.431 "is_configured": true, 00:12:24.431 "data_offset": 2048, 00:12:24.431 "data_size": 63488 00:12:24.431 }, 00:12:24.431 { 00:12:24.431 "name": "pt2", 00:12:24.431 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:24.431 "is_configured": true, 00:12:24.431 "data_offset": 2048, 00:12:24.431 "data_size": 63488 00:12:24.431 }, 00:12:24.431 { 00:12:24.431 "name": "pt3", 00:12:24.431 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:24.431 "is_configured": true, 00:12:24.431 "data_offset": 2048, 00:12:24.431 "data_size": 63488 
00:12:24.431 }, 00:12:24.431 { 00:12:24.431 "name": "pt4", 00:12:24.431 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:24.431 "is_configured": true, 00:12:24.431 "data_offset": 2048, 00:12:24.431 "data_size": 63488 00:12:24.431 } 00:12:24.431 ] 00:12:24.431 } 00:12:24.431 } 00:12:24.431 }' 00:12:24.431 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:24.431 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:24.431 pt2 00:12:24.431 pt3 00:12:24.431 pt4' 00:12:24.431 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:24.431 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:24.431 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:24.431 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:24.431 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.431 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.431 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:24.431 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.431 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:24.431 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:24.431 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:24.431 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:12:24.431 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:24.431 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.431 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.431 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.431 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:24.431 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:24.431 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:24.431 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:24.431 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:24.431 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.431 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.431 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.689 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:24.689 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:24.689 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:24.689 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:24.689 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:12:24.689 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.689 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.689 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.689 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:24.689 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:24.689 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:24.689 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:24.689 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.689 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.689 [2024-12-10 21:39:25.279867] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:24.689 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.689 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c840e7d2-2572-45e4-ba35-c4f857c812a2 '!=' c840e7d2-2572-45e4-ba35-c4f857c812a2 ']' 00:12:24.689 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:12:24.689 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:24.689 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:24.689 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70855 00:12:24.689 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70855 ']' 00:12:24.689 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70855 00:12:24.689 21:39:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:12:24.689 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:24.689 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70855 00:12:24.689 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:24.689 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:24.689 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70855' 00:12:24.689 killing process with pid 70855 00:12:24.689 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70855 00:12:24.689 [2024-12-10 21:39:25.346245] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:24.689 [2024-12-10 21:39:25.346358] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:24.689 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70855 00:12:24.689 [2024-12-10 21:39:25.346455] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:24.689 [2024-12-10 21:39:25.346466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:25.255 [2024-12-10 21:39:25.772133] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:26.189 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:26.189 00:12:26.189 real 0m5.705s 00:12:26.189 user 0m8.122s 00:12:26.189 sys 0m0.960s 00:12:26.189 21:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.189 21:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.189 ************************************ 00:12:26.189 END TEST raid_superblock_test 
00:12:26.189 ************************************ 00:12:26.448 21:39:27 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:12:26.448 21:39:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:26.448 21:39:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.448 21:39:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:26.448 ************************************ 00:12:26.448 START TEST raid_read_error_test 00:12:26.448 ************************************ 00:12:26.448 21:39:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:12:26.448 21:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:26.448 21:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:26.448 21:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:26.448 21:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:26.448 21:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:26.448 21:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:26.448 21:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:26.448 21:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:26.448 21:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:26.448 21:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:26.448 21:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:26.448 21:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:26.448 21:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:12:26.448 21:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:26.448 21:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:26.448 21:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:26.448 21:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:26.448 21:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:26.448 21:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:26.448 21:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:26.448 21:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:26.448 21:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:26.448 21:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:26.448 21:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:26.448 21:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:26.448 21:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:26.448 21:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:26.448 21:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:26.448 21:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.WtQquvHQrp 00:12:26.448 21:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71116 00:12:26.448 21:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71116 00:12:26.448 21:39:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- 
# '[' -z 71116 ']' 00:12:26.448 21:39:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.448 21:39:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:26.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.448 21:39:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.448 21:39:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:26.448 21:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:26.448 21:39:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.448 [2024-12-10 21:39:27.129583] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:12:26.448 [2024-12-10 21:39:27.129710] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71116 ] 00:12:26.706 [2024-12-10 21:39:27.302677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.706 [2024-12-10 21:39:27.423133] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.965 [2024-12-10 21:39:27.624214] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:26.965 [2024-12-10 21:39:27.624289] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:27.573 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:27.573 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:27.573 21:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:27.573 21:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:27.573 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.573 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.573 BaseBdev1_malloc 00:12:27.573 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.573 21:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:27.573 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.573 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.573 true 00:12:27.573 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:27.573 21:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:27.573 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.574 [2024-12-10 21:39:28.100462] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:27.574 [2024-12-10 21:39:28.100530] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.574 [2024-12-10 21:39:28.100551] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:27.574 [2024-12-10 21:39:28.100563] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.574 [2024-12-10 21:39:28.102677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.574 [2024-12-10 21:39:28.102711] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:27.574 BaseBdev1 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.574 BaseBdev2_malloc 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.574 true 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.574 [2024-12-10 21:39:28.164447] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:27.574 [2024-12-10 21:39:28.164504] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.574 [2024-12-10 21:39:28.164523] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:27.574 [2024-12-10 21:39:28.164535] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.574 [2024-12-10 21:39:28.166780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.574 [2024-12-10 21:39:28.166822] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:27.574 BaseBdev2 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.574 BaseBdev3_malloc 00:12:27.574 21:39:28 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.574 true 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.574 [2024-12-10 21:39:28.238939] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:27.574 [2024-12-10 21:39:28.239003] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.574 [2024-12-10 21:39:28.239028] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:27.574 [2024-12-10 21:39:28.239040] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.574 [2024-12-10 21:39:28.241574] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.574 [2024-12-10 21:39:28.241609] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:27.574 BaseBdev3 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.574 BaseBdev4_malloc 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.574 true 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.574 [2024-12-10 21:39:28.307212] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:27.574 [2024-12-10 21:39:28.307270] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.574 [2024-12-10 21:39:28.307292] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:27.574 [2024-12-10 21:39:28.307304] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.574 [2024-12-10 21:39:28.309942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.574 [2024-12-10 21:39:28.309986] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:27.574 BaseBdev4 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.574 [2024-12-10 21:39:28.319262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:27.574 [2024-12-10 21:39:28.321351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:27.574 [2024-12-10 21:39:28.321469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:27.574 [2024-12-10 21:39:28.321544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:27.574 [2024-12-10 21:39:28.321810] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:27.574 [2024-12-10 21:39:28.321839] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:27.574 [2024-12-10 21:39:28.322131] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:27.574 [2024-12-10 21:39:28.322332] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:27.574 [2024-12-10 21:39:28.322353] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:27.574 [2024-12-10 21:39:28.322562] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:27.574 21:39:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.574 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.833 21:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.833 "name": "raid_bdev1", 00:12:27.833 "uuid": "c308cf5a-b6d4-404e-bc27-3048eba02eb2", 00:12:27.833 "strip_size_kb": 64, 00:12:27.833 "state": "online", 00:12:27.833 "raid_level": "raid0", 00:12:27.833 "superblock": true, 00:12:27.833 "num_base_bdevs": 4, 00:12:27.833 "num_base_bdevs_discovered": 4, 00:12:27.833 "num_base_bdevs_operational": 4, 00:12:27.833 "base_bdevs_list": [ 00:12:27.833 
{ 00:12:27.833 "name": "BaseBdev1", 00:12:27.833 "uuid": "f916defa-9679-5ba1-87a9-2d3c1e83037d", 00:12:27.833 "is_configured": true, 00:12:27.833 "data_offset": 2048, 00:12:27.833 "data_size": 63488 00:12:27.833 }, 00:12:27.833 { 00:12:27.833 "name": "BaseBdev2", 00:12:27.833 "uuid": "382cbdd4-cab0-59d0-92ac-76d10ade3554", 00:12:27.833 "is_configured": true, 00:12:27.833 "data_offset": 2048, 00:12:27.833 "data_size": 63488 00:12:27.833 }, 00:12:27.833 { 00:12:27.833 "name": "BaseBdev3", 00:12:27.833 "uuid": "f2bd918f-8d09-55f9-b596-7c2fb666c573", 00:12:27.833 "is_configured": true, 00:12:27.833 "data_offset": 2048, 00:12:27.833 "data_size": 63488 00:12:27.833 }, 00:12:27.833 { 00:12:27.833 "name": "BaseBdev4", 00:12:27.833 "uuid": "e492a9f9-ffeb-5473-8af5-6e315c8d5a4d", 00:12:27.833 "is_configured": true, 00:12:27.833 "data_offset": 2048, 00:12:27.833 "data_size": 63488 00:12:27.833 } 00:12:27.833 ] 00:12:27.833 }' 00:12:27.833 21:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.833 21:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.091 21:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:28.091 21:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:28.350 [2024-12-10 21:39:28.887780] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:29.284 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:29.284 21:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.284 21:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.284 21:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.284 21:39:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:29.284 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:29.284 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:29.284 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:29.284 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.284 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.284 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:29.284 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:29.284 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:29.284 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.284 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.284 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.284 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.284 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.284 21:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.284 21:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.284 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.284 21:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.284 21:39:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.284 "name": "raid_bdev1", 00:12:29.284 "uuid": "c308cf5a-b6d4-404e-bc27-3048eba02eb2", 00:12:29.284 "strip_size_kb": 64, 00:12:29.284 "state": "online", 00:12:29.284 "raid_level": "raid0", 00:12:29.284 "superblock": true, 00:12:29.284 "num_base_bdevs": 4, 00:12:29.284 "num_base_bdevs_discovered": 4, 00:12:29.284 "num_base_bdevs_operational": 4, 00:12:29.284 "base_bdevs_list": [ 00:12:29.284 { 00:12:29.284 "name": "BaseBdev1", 00:12:29.284 "uuid": "f916defa-9679-5ba1-87a9-2d3c1e83037d", 00:12:29.284 "is_configured": true, 00:12:29.284 "data_offset": 2048, 00:12:29.284 "data_size": 63488 00:12:29.284 }, 00:12:29.284 { 00:12:29.284 "name": "BaseBdev2", 00:12:29.284 "uuid": "382cbdd4-cab0-59d0-92ac-76d10ade3554", 00:12:29.284 "is_configured": true, 00:12:29.284 "data_offset": 2048, 00:12:29.284 "data_size": 63488 00:12:29.284 }, 00:12:29.284 { 00:12:29.284 "name": "BaseBdev3", 00:12:29.284 "uuid": "f2bd918f-8d09-55f9-b596-7c2fb666c573", 00:12:29.284 "is_configured": true, 00:12:29.284 "data_offset": 2048, 00:12:29.284 "data_size": 63488 00:12:29.284 }, 00:12:29.284 { 00:12:29.284 "name": "BaseBdev4", 00:12:29.284 "uuid": "e492a9f9-ffeb-5473-8af5-6e315c8d5a4d", 00:12:29.284 "is_configured": true, 00:12:29.284 "data_offset": 2048, 00:12:29.284 "data_size": 63488 00:12:29.284 } 00:12:29.284 ] 00:12:29.284 }' 00:12:29.284 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.284 21:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.542 21:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:29.542 21:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.542 21:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.542 [2024-12-10 21:39:30.288817] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:29.542 [2024-12-10 21:39:30.288856] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:29.542 [2024-12-10 21:39:30.292079] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:29.542 [2024-12-10 21:39:30.292153] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.542 [2024-12-10 21:39:30.292202] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:29.542 [2024-12-10 21:39:30.292220] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:29.542 { 00:12:29.542 "results": [ 00:12:29.542 { 00:12:29.542 "job": "raid_bdev1", 00:12:29.542 "core_mask": "0x1", 00:12:29.542 "workload": "randrw", 00:12:29.542 "percentage": 50, 00:12:29.542 "status": "finished", 00:12:29.542 "queue_depth": 1, 00:12:29.542 "io_size": 131072, 00:12:29.542 "runtime": 1.401726, 00:12:29.542 "iops": 13739.489743359258, 00:12:29.542 "mibps": 1717.4362179199072, 00:12:29.542 "io_failed": 1, 00:12:29.542 "io_timeout": 0, 00:12:29.542 "avg_latency_us": 100.61846395225982, 00:12:29.542 "min_latency_us": 29.065502183406114, 00:12:29.542 "max_latency_us": 1516.7720524017468 00:12:29.542 } 00:12:29.542 ], 00:12:29.542 "core_count": 1 00:12:29.542 } 00:12:29.542 21:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.542 21:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71116 00:12:29.542 21:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71116 ']' 00:12:29.542 21:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71116 00:12:29.542 21:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:29.542 21:39:30 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:29.542 21:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71116 00:12:29.800 21:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:29.800 21:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:29.800 21:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71116' 00:12:29.800 killing process with pid 71116 00:12:29.800 21:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71116 00:12:29.800 [2024-12-10 21:39:30.336034] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:29.800 21:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71116 00:12:30.058 [2024-12-10 21:39:30.695451] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:31.435 21:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:31.435 21:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.WtQquvHQrp 00:12:31.435 21:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:31.435 21:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:12:31.435 21:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:31.435 21:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:31.435 21:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:31.435 21:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:12:31.435 00:12:31.435 real 0m4.925s 00:12:31.435 user 0m5.917s 00:12:31.435 sys 0m0.563s 00:12:31.435 21:39:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:12:31.435 21:39:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.435 ************************************ 00:12:31.435 END TEST raid_read_error_test 00:12:31.435 ************************************ 00:12:31.435 21:39:32 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:12:31.435 21:39:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:31.435 21:39:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:31.435 21:39:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:31.435 ************************************ 00:12:31.435 START TEST raid_write_error_test 00:12:31.435 ************************************ 00:12:31.435 21:39:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:12:31.435 21:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:31.435 21:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:31.435 21:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:31.435 21:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:31.435 21:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:31.435 21:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:31.435 21:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:31.435 21:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:31.435 21:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:31.435 21:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:31.435 21:39:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:31.435 21:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:31.435 21:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:31.435 21:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:31.435 21:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:31.435 21:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:31.435 21:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:31.435 21:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:31.435 21:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:31.435 21:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:31.435 21:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:31.435 21:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:31.435 21:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:31.435 21:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:31.435 21:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:31.435 21:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:31.435 21:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:31.435 21:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:31.435 21:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0HxpS7BwaB 
00:12:31.435 21:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71267 00:12:31.435 21:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:31.435 21:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71267 00:12:31.435 21:39:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71267 ']' 00:12:31.435 21:39:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.435 21:39:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:31.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.435 21:39:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.435 21:39:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:31.435 21:39:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.435 [2024-12-10 21:39:32.132730] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:12:31.435 [2024-12-10 21:39:32.132850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71267 ] 00:12:31.694 [2024-12-10 21:39:32.303099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.694 [2024-12-10 21:39:32.425424] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.954 [2024-12-10 21:39:32.628467] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:31.954 [2024-12-10 21:39:32.628539] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:32.224 21:39:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:32.224 21:39:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:32.224 21:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:32.224 21:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:32.224 21:39:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.224 21:39:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.484 BaseBdev1_malloc 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.484 true 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.484 [2024-12-10 21:39:33.033054] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:32.484 [2024-12-10 21:39:33.033109] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.484 [2024-12-10 21:39:33.033129] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:32.484 [2024-12-10 21:39:33.033141] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.484 [2024-12-10 21:39:33.035227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.484 [2024-12-10 21:39:33.035263] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:32.484 BaseBdev1 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.484 BaseBdev2_malloc 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:32.484 21:39:33 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.484 true 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.484 [2024-12-10 21:39:33.101052] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:32.484 [2024-12-10 21:39:33.101105] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.484 [2024-12-10 21:39:33.101123] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:32.484 [2024-12-10 21:39:33.101133] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.484 [2024-12-10 21:39:33.103237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.484 [2024-12-10 21:39:33.103269] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:32.484 BaseBdev2 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:32.484 BaseBdev3_malloc 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.484 true 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.484 [2024-12-10 21:39:33.180634] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:32.484 [2024-12-10 21:39:33.180697] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.484 [2024-12-10 21:39:33.180725] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:32.484 [2024-12-10 21:39:33.180741] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.484 [2024-12-10 21:39:33.183946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.484 [2024-12-10 21:39:33.183990] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:32.484 BaseBdev3 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.484 BaseBdev4_malloc 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.484 true 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.484 [2024-12-10 21:39:33.246640] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:32.484 [2024-12-10 21:39:33.246703] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.484 [2024-12-10 21:39:33.246728] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:32.484 [2024-12-10 21:39:33.246742] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.484 [2024-12-10 21:39:33.249951] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.484 [2024-12-10 21:39:33.249992] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:32.484 BaseBdev4 
00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.484 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.484 [2024-12-10 21:39:33.258712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:32.484 [2024-12-10 21:39:33.260520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:32.484 [2024-12-10 21:39:33.260626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:32.484 [2024-12-10 21:39:33.260688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:32.484 [2024-12-10 21:39:33.260902] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:32.484 [2024-12-10 21:39:33.260923] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:32.485 [2024-12-10 21:39:33.261158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:32.485 [2024-12-10 21:39:33.261329] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:32.485 [2024-12-10 21:39:33.261347] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:32.485 [2024-12-10 21:39:33.261524] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.485 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.485 21:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:12:32.744 21:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.744 21:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.744 21:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:32.744 21:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:32.744 21:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:32.744 21:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.744 21:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.744 21:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.744 21:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.744 21:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.744 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.744 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.744 21:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.744 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.744 21:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.744 "name": "raid_bdev1", 00:12:32.744 "uuid": "595370c6-179a-44c8-8f58-37a19bc0e05f", 00:12:32.744 "strip_size_kb": 64, 00:12:32.744 "state": "online", 00:12:32.744 "raid_level": "raid0", 00:12:32.744 "superblock": true, 00:12:32.744 "num_base_bdevs": 4, 00:12:32.744 "num_base_bdevs_discovered": 4, 00:12:32.744 
"num_base_bdevs_operational": 4, 00:12:32.744 "base_bdevs_list": [ 00:12:32.744 { 00:12:32.745 "name": "BaseBdev1", 00:12:32.745 "uuid": "57ae00f8-8cfd-55d1-a628-c35919aa97e2", 00:12:32.745 "is_configured": true, 00:12:32.745 "data_offset": 2048, 00:12:32.745 "data_size": 63488 00:12:32.745 }, 00:12:32.745 { 00:12:32.745 "name": "BaseBdev2", 00:12:32.745 "uuid": "35842053-e46d-516a-9f64-bae98298bf33", 00:12:32.745 "is_configured": true, 00:12:32.745 "data_offset": 2048, 00:12:32.745 "data_size": 63488 00:12:32.745 }, 00:12:32.745 { 00:12:32.745 "name": "BaseBdev3", 00:12:32.745 "uuid": "5d1f8792-261a-579e-94fe-db006378aecb", 00:12:32.745 "is_configured": true, 00:12:32.745 "data_offset": 2048, 00:12:32.745 "data_size": 63488 00:12:32.745 }, 00:12:32.745 { 00:12:32.745 "name": "BaseBdev4", 00:12:32.745 "uuid": "746064d7-a8f0-5457-a626-a57042218469", 00:12:32.745 "is_configured": true, 00:12:32.745 "data_offset": 2048, 00:12:32.745 "data_size": 63488 00:12:32.745 } 00:12:32.745 ] 00:12:32.745 }' 00:12:32.745 21:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.745 21:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.004 21:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:33.004 21:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:33.263 [2024-12-10 21:39:33.827269] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:34.202 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:34.202 21:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.202 21:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.202 21:39:34 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.202 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:34.202 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:34.202 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:34.202 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:34.202 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.202 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.202 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:34.202 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:34.202 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:34.202 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.202 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.202 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.202 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.202 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.202 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.202 21:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.202 21:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.202 21:39:34 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.202 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.202 "name": "raid_bdev1", 00:12:34.202 "uuid": "595370c6-179a-44c8-8f58-37a19bc0e05f", 00:12:34.202 "strip_size_kb": 64, 00:12:34.202 "state": "online", 00:12:34.202 "raid_level": "raid0", 00:12:34.202 "superblock": true, 00:12:34.202 "num_base_bdevs": 4, 00:12:34.202 "num_base_bdevs_discovered": 4, 00:12:34.202 "num_base_bdevs_operational": 4, 00:12:34.202 "base_bdevs_list": [ 00:12:34.202 { 00:12:34.202 "name": "BaseBdev1", 00:12:34.202 "uuid": "57ae00f8-8cfd-55d1-a628-c35919aa97e2", 00:12:34.202 "is_configured": true, 00:12:34.202 "data_offset": 2048, 00:12:34.202 "data_size": 63488 00:12:34.202 }, 00:12:34.202 { 00:12:34.202 "name": "BaseBdev2", 00:12:34.202 "uuid": "35842053-e46d-516a-9f64-bae98298bf33", 00:12:34.202 "is_configured": true, 00:12:34.202 "data_offset": 2048, 00:12:34.202 "data_size": 63488 00:12:34.202 }, 00:12:34.202 { 00:12:34.202 "name": "BaseBdev3", 00:12:34.202 "uuid": "5d1f8792-261a-579e-94fe-db006378aecb", 00:12:34.202 "is_configured": true, 00:12:34.202 "data_offset": 2048, 00:12:34.202 "data_size": 63488 00:12:34.202 }, 00:12:34.202 { 00:12:34.202 "name": "BaseBdev4", 00:12:34.202 "uuid": "746064d7-a8f0-5457-a626-a57042218469", 00:12:34.202 "is_configured": true, 00:12:34.202 "data_offset": 2048, 00:12:34.202 "data_size": 63488 00:12:34.202 } 00:12:34.202 ] 00:12:34.202 }' 00:12:34.202 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.202 21:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.460 21:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:34.460 21:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.460 21:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:12:34.460 [2024-12-10 21:39:35.235968] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:34.460 [2024-12-10 21:39:35.236007] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:34.460 [2024-12-10 21:39:35.238765] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:34.460 [2024-12-10 21:39:35.238829] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.460 [2024-12-10 21:39:35.238873] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:34.460 [2024-12-10 21:39:35.238885] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:34.460 21:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.460 { 00:12:34.460 "results": [ 00:12:34.460 { 00:12:34.460 "job": "raid_bdev1", 00:12:34.460 "core_mask": "0x1", 00:12:34.460 "workload": "randrw", 00:12:34.460 "percentage": 50, 00:12:34.460 "status": "finished", 00:12:34.460 "queue_depth": 1, 00:12:34.460 "io_size": 131072, 00:12:34.460 "runtime": 1.409509, 00:12:34.460 "iops": 14301.434045472572, 00:12:34.460 "mibps": 1787.6792556840714, 00:12:34.460 "io_failed": 1, 00:12:34.460 "io_timeout": 0, 00:12:34.460 "avg_latency_us": 96.88000344856643, 00:12:34.460 "min_latency_us": 27.50043668122271, 00:12:34.460 "max_latency_us": 1459.5353711790392 00:12:34.460 } 00:12:34.460 ], 00:12:34.460 "core_count": 1 00:12:34.460 } 00:12:34.460 21:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71267 00:12:34.719 21:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71267 ']' 00:12:34.719 21:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71267 00:12:34.719 21:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:12:34.719 21:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:34.719 21:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71267 00:12:34.719 21:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:34.719 21:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:34.719 killing process with pid 71267 00:12:34.719 21:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71267' 00:12:34.719 21:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71267 00:12:34.719 [2024-12-10 21:39:35.280194] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:34.719 21:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71267 00:12:34.978 [2024-12-10 21:39:35.644871] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:36.357 21:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0HxpS7BwaB 00:12:36.357 21:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:36.357 21:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:36.357 21:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:12:36.357 21:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:36.357 21:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:36.357 21:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:36.357 21:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:12:36.357 00:12:36.357 real 0m4.873s 00:12:36.357 user 0m5.755s 00:12:36.357 sys 0m0.613s 00:12:36.357 21:39:36 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:36.357 21:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.357 ************************************ 00:12:36.357 END TEST raid_write_error_test 00:12:36.357 ************************************ 00:12:36.357 21:39:36 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:36.357 21:39:36 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:12:36.357 21:39:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:36.357 21:39:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:36.357 21:39:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:36.357 ************************************ 00:12:36.357 START TEST raid_state_function_test 00:12:36.357 ************************************ 00:12:36.357 21:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:12:36.357 21:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:36.357 21:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:36.357 21:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:36.357 21:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:36.357 21:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:36.357 21:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:36.357 21:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:36.357 21:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:36.357 21:39:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:36.357 21:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:36.357 21:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:36.357 21:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:36.357 21:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:36.357 21:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:36.357 21:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:36.357 21:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:36.357 21:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:36.357 21:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:36.357 21:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:36.357 21:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:36.357 21:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:36.357 21:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:36.357 21:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:36.357 21:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:36.357 21:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:36.357 21:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:36.358 21:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:12:36.358 21:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:36.358 21:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:36.358 21:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71414 00:12:36.358 21:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:36.358 21:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71414' 00:12:36.358 Process raid pid: 71414 00:12:36.358 21:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71414 00:12:36.358 21:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71414 ']' 00:12:36.358 21:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.358 21:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:36.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.358 21:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.358 21:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:36.358 21:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.358 [2024-12-10 21:39:37.070470] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:12:36.358 [2024-12-10 21:39:37.070599] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.618 [2024-12-10 21:39:37.246437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.618 [2024-12-10 21:39:37.368963] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.876 [2024-12-10 21:39:37.593578] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:36.876 [2024-12-10 21:39:37.593629] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:37.445 21:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:37.445 21:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:37.445 21:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:37.445 21:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.445 21:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.446 [2024-12-10 21:39:37.928616] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:37.446 [2024-12-10 21:39:37.928668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:37.446 [2024-12-10 21:39:37.928696] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:37.446 [2024-12-10 21:39:37.928707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:37.446 [2024-12-10 21:39:37.928714] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:37.446 [2024-12-10 21:39:37.928724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:37.446 [2024-12-10 21:39:37.928731] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:37.446 [2024-12-10 21:39:37.928740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:37.446 21:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.446 21:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:37.446 21:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:37.446 21:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:37.446 21:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:37.446 21:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:37.446 21:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:37.446 21:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.446 21:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.446 21:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.446 21:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.446 21:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.446 21:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.446 21:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:12:37.446 21:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.446 21:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.446 21:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.446 "name": "Existed_Raid", 00:12:37.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.446 "strip_size_kb": 64, 00:12:37.446 "state": "configuring", 00:12:37.446 "raid_level": "concat", 00:12:37.446 "superblock": false, 00:12:37.446 "num_base_bdevs": 4, 00:12:37.446 "num_base_bdevs_discovered": 0, 00:12:37.446 "num_base_bdevs_operational": 4, 00:12:37.446 "base_bdevs_list": [ 00:12:37.446 { 00:12:37.446 "name": "BaseBdev1", 00:12:37.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.446 "is_configured": false, 00:12:37.446 "data_offset": 0, 00:12:37.446 "data_size": 0 00:12:37.446 }, 00:12:37.446 { 00:12:37.446 "name": "BaseBdev2", 00:12:37.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.446 "is_configured": false, 00:12:37.446 "data_offset": 0, 00:12:37.446 "data_size": 0 00:12:37.446 }, 00:12:37.446 { 00:12:37.446 "name": "BaseBdev3", 00:12:37.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.446 "is_configured": false, 00:12:37.446 "data_offset": 0, 00:12:37.446 "data_size": 0 00:12:37.446 }, 00:12:37.446 { 00:12:37.446 "name": "BaseBdev4", 00:12:37.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.446 "is_configured": false, 00:12:37.446 "data_offset": 0, 00:12:37.446 "data_size": 0 00:12:37.446 } 00:12:37.446 ] 00:12:37.446 }' 00:12:37.446 21:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.446 21:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.704 21:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:12:37.704 21:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.704 21:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.704 [2024-12-10 21:39:38.391833] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:37.704 [2024-12-10 21:39:38.391886] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:37.704 21:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.704 21:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:37.704 21:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.704 21:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.704 [2024-12-10 21:39:38.399775] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:37.704 [2024-12-10 21:39:38.399817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:37.704 [2024-12-10 21:39:38.399843] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:37.704 [2024-12-10 21:39:38.399854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:37.704 [2024-12-10 21:39:38.399861] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:37.704 [2024-12-10 21:39:38.399870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:37.704 [2024-12-10 21:39:38.399877] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:37.704 [2024-12-10 21:39:38.399886] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:37.704 21:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.704 21:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:37.704 21:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.704 21:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.704 [2024-12-10 21:39:38.444913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:37.704 BaseBdev1 00:12:37.704 21:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.704 21:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:37.705 21:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:37.705 21:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:37.705 21:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:37.705 21:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:37.705 21:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:37.705 21:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:37.705 21:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.705 21:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.705 21:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.705 21:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:37.705 21:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.705 21:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.705 [ 00:12:37.705 { 00:12:37.705 "name": "BaseBdev1", 00:12:37.705 "aliases": [ 00:12:37.705 "3abde4a3-e2cb-4886-8d51-4862e1708bc1" 00:12:37.705 ], 00:12:37.705 "product_name": "Malloc disk", 00:12:37.705 "block_size": 512, 00:12:37.705 "num_blocks": 65536, 00:12:37.705 "uuid": "3abde4a3-e2cb-4886-8d51-4862e1708bc1", 00:12:37.705 "assigned_rate_limits": { 00:12:37.705 "rw_ios_per_sec": 0, 00:12:37.705 "rw_mbytes_per_sec": 0, 00:12:37.705 "r_mbytes_per_sec": 0, 00:12:37.705 "w_mbytes_per_sec": 0 00:12:37.705 }, 00:12:37.705 "claimed": true, 00:12:37.705 "claim_type": "exclusive_write", 00:12:37.705 "zoned": false, 00:12:37.705 "supported_io_types": { 00:12:37.705 "read": true, 00:12:37.705 "write": true, 00:12:37.705 "unmap": true, 00:12:37.705 "flush": true, 00:12:37.705 "reset": true, 00:12:37.705 "nvme_admin": false, 00:12:37.705 "nvme_io": false, 00:12:37.705 "nvme_io_md": false, 00:12:37.705 "write_zeroes": true, 00:12:37.705 "zcopy": true, 00:12:37.705 "get_zone_info": false, 00:12:37.705 "zone_management": false, 00:12:37.705 "zone_append": false, 00:12:37.705 "compare": false, 00:12:37.705 "compare_and_write": false, 00:12:37.705 "abort": true, 00:12:37.705 "seek_hole": false, 00:12:37.705 "seek_data": false, 00:12:37.705 "copy": true, 00:12:37.705 "nvme_iov_md": false 00:12:37.705 }, 00:12:37.705 "memory_domains": [ 00:12:37.705 { 00:12:37.705 "dma_device_id": "system", 00:12:37.705 "dma_device_type": 1 00:12:37.705 }, 00:12:37.705 { 00:12:37.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.705 "dma_device_type": 2 00:12:37.705 } 00:12:37.705 ], 00:12:37.705 "driver_specific": {} 00:12:37.705 } 00:12:37.705 ] 00:12:37.705 21:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:37.705 21:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:37.705 21:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:37.705 21:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:37.705 21:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:37.705 21:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:37.705 21:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:37.705 21:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:37.705 21:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.705 21:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.705 21:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.705 21:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.963 21:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.963 21:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.963 21:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.963 21:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.963 21:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.963 21:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.963 "name": "Existed_Raid", 
00:12:37.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.963 "strip_size_kb": 64, 00:12:37.963 "state": "configuring", 00:12:37.963 "raid_level": "concat", 00:12:37.963 "superblock": false, 00:12:37.963 "num_base_bdevs": 4, 00:12:37.963 "num_base_bdevs_discovered": 1, 00:12:37.963 "num_base_bdevs_operational": 4, 00:12:37.963 "base_bdevs_list": [ 00:12:37.963 { 00:12:37.963 "name": "BaseBdev1", 00:12:37.963 "uuid": "3abde4a3-e2cb-4886-8d51-4862e1708bc1", 00:12:37.963 "is_configured": true, 00:12:37.963 "data_offset": 0, 00:12:37.963 "data_size": 65536 00:12:37.963 }, 00:12:37.963 { 00:12:37.963 "name": "BaseBdev2", 00:12:37.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.963 "is_configured": false, 00:12:37.963 "data_offset": 0, 00:12:37.963 "data_size": 0 00:12:37.963 }, 00:12:37.963 { 00:12:37.963 "name": "BaseBdev3", 00:12:37.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.963 "is_configured": false, 00:12:37.963 "data_offset": 0, 00:12:37.963 "data_size": 0 00:12:37.963 }, 00:12:37.963 { 00:12:37.963 "name": "BaseBdev4", 00:12:37.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.963 "is_configured": false, 00:12:37.963 "data_offset": 0, 00:12:37.963 "data_size": 0 00:12:37.963 } 00:12:37.963 ] 00:12:37.963 }' 00:12:37.963 21:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.963 21:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.222 21:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:38.222 21:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.222 21:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.222 [2024-12-10 21:39:38.936152] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:38.222 [2024-12-10 21:39:38.936240] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:38.222 21:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.222 21:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:38.222 21:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.222 21:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.222 [2024-12-10 21:39:38.948188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:38.222 [2024-12-10 21:39:38.950218] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:38.222 [2024-12-10 21:39:38.950261] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:38.222 [2024-12-10 21:39:38.950271] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:38.222 [2024-12-10 21:39:38.950281] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:38.222 [2024-12-10 21:39:38.950288] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:38.222 [2024-12-10 21:39:38.950297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:38.222 21:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.222 21:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:38.222 21:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:38.222 21:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:12:38.222 21:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:38.222 21:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:38.222 21:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:38.222 21:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:38.222 21:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:38.222 21:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.222 21:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.222 21:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.222 21:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.222 21:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.222 21:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.222 21:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.222 21:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.222 21:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.481 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.481 "name": "Existed_Raid", 00:12:38.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.481 "strip_size_kb": 64, 00:12:38.481 "state": "configuring", 00:12:38.481 "raid_level": "concat", 00:12:38.481 "superblock": false, 00:12:38.481 "num_base_bdevs": 4, 00:12:38.481 
"num_base_bdevs_discovered": 1, 00:12:38.481 "num_base_bdevs_operational": 4, 00:12:38.481 "base_bdevs_list": [ 00:12:38.481 { 00:12:38.481 "name": "BaseBdev1", 00:12:38.481 "uuid": "3abde4a3-e2cb-4886-8d51-4862e1708bc1", 00:12:38.481 "is_configured": true, 00:12:38.481 "data_offset": 0, 00:12:38.481 "data_size": 65536 00:12:38.481 }, 00:12:38.481 { 00:12:38.481 "name": "BaseBdev2", 00:12:38.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.481 "is_configured": false, 00:12:38.481 "data_offset": 0, 00:12:38.481 "data_size": 0 00:12:38.481 }, 00:12:38.481 { 00:12:38.481 "name": "BaseBdev3", 00:12:38.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.481 "is_configured": false, 00:12:38.481 "data_offset": 0, 00:12:38.481 "data_size": 0 00:12:38.481 }, 00:12:38.481 { 00:12:38.481 "name": "BaseBdev4", 00:12:38.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.481 "is_configured": false, 00:12:38.481 "data_offset": 0, 00:12:38.481 "data_size": 0 00:12:38.481 } 00:12:38.481 ] 00:12:38.481 }' 00:12:38.481 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.481 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.739 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:38.739 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.739 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.739 [2024-12-10 21:39:39.432087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:38.739 BaseBdev2 00:12:38.739 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.739 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:38.740 21:39:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:38.740 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:38.740 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:38.740 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:38.740 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:38.740 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:38.740 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.740 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.740 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.740 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:38.740 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.740 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.740 [ 00:12:38.740 { 00:12:38.740 "name": "BaseBdev2", 00:12:38.740 "aliases": [ 00:12:38.740 "733b8f70-7980-42d9-b1fc-5a32b2463154" 00:12:38.740 ], 00:12:38.740 "product_name": "Malloc disk", 00:12:38.740 "block_size": 512, 00:12:38.740 "num_blocks": 65536, 00:12:38.740 "uuid": "733b8f70-7980-42d9-b1fc-5a32b2463154", 00:12:38.740 "assigned_rate_limits": { 00:12:38.740 "rw_ios_per_sec": 0, 00:12:38.740 "rw_mbytes_per_sec": 0, 00:12:38.740 "r_mbytes_per_sec": 0, 00:12:38.740 "w_mbytes_per_sec": 0 00:12:38.740 }, 00:12:38.740 "claimed": true, 00:12:38.740 "claim_type": "exclusive_write", 00:12:38.740 "zoned": false, 00:12:38.740 "supported_io_types": { 
00:12:38.740 "read": true, 00:12:38.740 "write": true, 00:12:38.740 "unmap": true, 00:12:38.740 "flush": true, 00:12:38.740 "reset": true, 00:12:38.740 "nvme_admin": false, 00:12:38.740 "nvme_io": false, 00:12:38.740 "nvme_io_md": false, 00:12:38.740 "write_zeroes": true, 00:12:38.740 "zcopy": true, 00:12:38.740 "get_zone_info": false, 00:12:38.740 "zone_management": false, 00:12:38.740 "zone_append": false, 00:12:38.740 "compare": false, 00:12:38.740 "compare_and_write": false, 00:12:38.740 "abort": true, 00:12:38.740 "seek_hole": false, 00:12:38.740 "seek_data": false, 00:12:38.740 "copy": true, 00:12:38.740 "nvme_iov_md": false 00:12:38.740 }, 00:12:38.740 "memory_domains": [ 00:12:38.740 { 00:12:38.740 "dma_device_id": "system", 00:12:38.740 "dma_device_type": 1 00:12:38.740 }, 00:12:38.740 { 00:12:38.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.740 "dma_device_type": 2 00:12:38.740 } 00:12:38.740 ], 00:12:38.740 "driver_specific": {} 00:12:38.740 } 00:12:38.740 ] 00:12:38.740 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.740 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:38.740 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:38.740 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:38.740 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:38.740 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:38.740 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:38.740 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:38.740 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:12:38.740 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:38.740 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.740 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.740 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.740 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.740 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.740 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.740 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.740 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.740 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.740 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.740 "name": "Existed_Raid", 00:12:38.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.740 "strip_size_kb": 64, 00:12:38.740 "state": "configuring", 00:12:38.740 "raid_level": "concat", 00:12:38.740 "superblock": false, 00:12:38.740 "num_base_bdevs": 4, 00:12:38.740 "num_base_bdevs_discovered": 2, 00:12:38.740 "num_base_bdevs_operational": 4, 00:12:38.740 "base_bdevs_list": [ 00:12:38.740 { 00:12:38.740 "name": "BaseBdev1", 00:12:38.740 "uuid": "3abde4a3-e2cb-4886-8d51-4862e1708bc1", 00:12:38.740 "is_configured": true, 00:12:38.740 "data_offset": 0, 00:12:38.740 "data_size": 65536 00:12:38.740 }, 00:12:38.740 { 00:12:38.740 "name": "BaseBdev2", 00:12:38.740 "uuid": "733b8f70-7980-42d9-b1fc-5a32b2463154", 00:12:38.740 
"is_configured": true, 00:12:38.740 "data_offset": 0, 00:12:38.740 "data_size": 65536 00:12:38.740 }, 00:12:38.740 { 00:12:38.740 "name": "BaseBdev3", 00:12:38.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.740 "is_configured": false, 00:12:38.740 "data_offset": 0, 00:12:38.740 "data_size": 0 00:12:38.740 }, 00:12:38.740 { 00:12:38.740 "name": "BaseBdev4", 00:12:38.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.740 "is_configured": false, 00:12:38.740 "data_offset": 0, 00:12:38.740 "data_size": 0 00:12:38.740 } 00:12:38.740 ] 00:12:38.740 }' 00:12:38.740 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.998 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.255 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:39.255 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.255 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.255 [2024-12-10 21:39:39.907322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:39.255 BaseBdev3 00:12:39.255 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.255 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:39.255 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:39.255 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:39.255 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:39.255 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:39.255 21:39:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:39.255 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:39.255 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.255 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.255 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.255 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:39.255 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.255 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.255 [ 00:12:39.255 { 00:12:39.255 "name": "BaseBdev3", 00:12:39.255 "aliases": [ 00:12:39.255 "7f661c18-d239-430d-9615-e1f16e3a7870" 00:12:39.255 ], 00:12:39.255 "product_name": "Malloc disk", 00:12:39.255 "block_size": 512, 00:12:39.255 "num_blocks": 65536, 00:12:39.255 "uuid": "7f661c18-d239-430d-9615-e1f16e3a7870", 00:12:39.255 "assigned_rate_limits": { 00:12:39.255 "rw_ios_per_sec": 0, 00:12:39.255 "rw_mbytes_per_sec": 0, 00:12:39.255 "r_mbytes_per_sec": 0, 00:12:39.255 "w_mbytes_per_sec": 0 00:12:39.255 }, 00:12:39.255 "claimed": true, 00:12:39.255 "claim_type": "exclusive_write", 00:12:39.255 "zoned": false, 00:12:39.255 "supported_io_types": { 00:12:39.255 "read": true, 00:12:39.255 "write": true, 00:12:39.255 "unmap": true, 00:12:39.255 "flush": true, 00:12:39.255 "reset": true, 00:12:39.255 "nvme_admin": false, 00:12:39.255 "nvme_io": false, 00:12:39.255 "nvme_io_md": false, 00:12:39.255 "write_zeroes": true, 00:12:39.255 "zcopy": true, 00:12:39.255 "get_zone_info": false, 00:12:39.255 "zone_management": false, 00:12:39.255 "zone_append": false, 00:12:39.255 "compare": false, 00:12:39.255 "compare_and_write": false, 
00:12:39.255 "abort": true, 00:12:39.255 "seek_hole": false, 00:12:39.255 "seek_data": false, 00:12:39.255 "copy": true, 00:12:39.255 "nvme_iov_md": false 00:12:39.255 }, 00:12:39.255 "memory_domains": [ 00:12:39.255 { 00:12:39.255 "dma_device_id": "system", 00:12:39.255 "dma_device_type": 1 00:12:39.255 }, 00:12:39.255 { 00:12:39.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.255 "dma_device_type": 2 00:12:39.255 } 00:12:39.255 ], 00:12:39.255 "driver_specific": {} 00:12:39.255 } 00:12:39.255 ] 00:12:39.255 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.255 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:39.255 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:39.255 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:39.255 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:39.255 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:39.255 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:39.255 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:39.255 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:39.255 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:39.255 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.255 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.255 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:39.255 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.255 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.255 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.255 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.255 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.255 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.255 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.255 "name": "Existed_Raid", 00:12:39.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.256 "strip_size_kb": 64, 00:12:39.256 "state": "configuring", 00:12:39.256 "raid_level": "concat", 00:12:39.256 "superblock": false, 00:12:39.256 "num_base_bdevs": 4, 00:12:39.256 "num_base_bdevs_discovered": 3, 00:12:39.256 "num_base_bdevs_operational": 4, 00:12:39.256 "base_bdevs_list": [ 00:12:39.256 { 00:12:39.256 "name": "BaseBdev1", 00:12:39.256 "uuid": "3abde4a3-e2cb-4886-8d51-4862e1708bc1", 00:12:39.256 "is_configured": true, 00:12:39.256 "data_offset": 0, 00:12:39.256 "data_size": 65536 00:12:39.256 }, 00:12:39.256 { 00:12:39.256 "name": "BaseBdev2", 00:12:39.256 "uuid": "733b8f70-7980-42d9-b1fc-5a32b2463154", 00:12:39.256 "is_configured": true, 00:12:39.256 "data_offset": 0, 00:12:39.256 "data_size": 65536 00:12:39.256 }, 00:12:39.256 { 00:12:39.256 "name": "BaseBdev3", 00:12:39.256 "uuid": "7f661c18-d239-430d-9615-e1f16e3a7870", 00:12:39.256 "is_configured": true, 00:12:39.256 "data_offset": 0, 00:12:39.256 "data_size": 65536 00:12:39.256 }, 00:12:39.256 { 00:12:39.256 "name": "BaseBdev4", 00:12:39.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.256 "is_configured": false, 
00:12:39.256 "data_offset": 0, 00:12:39.256 "data_size": 0 00:12:39.256 } 00:12:39.256 ] 00:12:39.256 }' 00:12:39.256 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.256 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.820 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:39.820 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.820 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.820 [2024-12-10 21:39:40.493575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:39.820 [2024-12-10 21:39:40.493624] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:39.820 [2024-12-10 21:39:40.493631] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:39.820 [2024-12-10 21:39:40.493908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:39.820 [2024-12-10 21:39:40.494070] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:39.820 [2024-12-10 21:39:40.494083] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:39.820 [2024-12-10 21:39:40.494360] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:39.820 BaseBdev4 00:12:39.820 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.820 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:39.820 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:39.820 21:39:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:39.820 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:39.820 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:39.820 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:39.820 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:39.820 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.820 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.820 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.820 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:39.820 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.820 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.820 [ 00:12:39.820 { 00:12:39.820 "name": "BaseBdev4", 00:12:39.820 "aliases": [ 00:12:39.820 "e25b4c6e-f93f-4cf1-b065-21929d36386c" 00:12:39.820 ], 00:12:39.820 "product_name": "Malloc disk", 00:12:39.820 "block_size": 512, 00:12:39.820 "num_blocks": 65536, 00:12:39.820 "uuid": "e25b4c6e-f93f-4cf1-b065-21929d36386c", 00:12:39.820 "assigned_rate_limits": { 00:12:39.820 "rw_ios_per_sec": 0, 00:12:39.820 "rw_mbytes_per_sec": 0, 00:12:39.820 "r_mbytes_per_sec": 0, 00:12:39.820 "w_mbytes_per_sec": 0 00:12:39.820 }, 00:12:39.820 "claimed": true, 00:12:39.820 "claim_type": "exclusive_write", 00:12:39.820 "zoned": false, 00:12:39.820 "supported_io_types": { 00:12:39.820 "read": true, 00:12:39.820 "write": true, 00:12:39.820 "unmap": true, 00:12:39.820 "flush": true, 00:12:39.820 "reset": true, 00:12:39.820 
"nvme_admin": false, 00:12:39.820 "nvme_io": false, 00:12:39.820 "nvme_io_md": false, 00:12:39.820 "write_zeroes": true, 00:12:39.820 "zcopy": true, 00:12:39.820 "get_zone_info": false, 00:12:39.820 "zone_management": false, 00:12:39.820 "zone_append": false, 00:12:39.820 "compare": false, 00:12:39.820 "compare_and_write": false, 00:12:39.820 "abort": true, 00:12:39.820 "seek_hole": false, 00:12:39.820 "seek_data": false, 00:12:39.820 "copy": true, 00:12:39.820 "nvme_iov_md": false 00:12:39.820 }, 00:12:39.820 "memory_domains": [ 00:12:39.820 { 00:12:39.820 "dma_device_id": "system", 00:12:39.820 "dma_device_type": 1 00:12:39.820 }, 00:12:39.820 { 00:12:39.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.820 "dma_device_type": 2 00:12:39.820 } 00:12:39.820 ], 00:12:39.820 "driver_specific": {} 00:12:39.820 } 00:12:39.820 ] 00:12:39.820 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.820 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:39.820 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:39.820 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:39.820 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:39.820 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:39.820 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:39.820 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:39.820 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:39.820 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:39.820 
21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.820 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.821 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.821 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.821 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.821 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.821 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.821 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.821 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.821 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.821 "name": "Existed_Raid", 00:12:39.821 "uuid": "93774049-1b11-4599-88c2-4dcca4784710", 00:12:39.821 "strip_size_kb": 64, 00:12:39.821 "state": "online", 00:12:39.821 "raid_level": "concat", 00:12:39.821 "superblock": false, 00:12:39.821 "num_base_bdevs": 4, 00:12:39.821 "num_base_bdevs_discovered": 4, 00:12:39.821 "num_base_bdevs_operational": 4, 00:12:39.821 "base_bdevs_list": [ 00:12:39.821 { 00:12:39.821 "name": "BaseBdev1", 00:12:39.821 "uuid": "3abde4a3-e2cb-4886-8d51-4862e1708bc1", 00:12:39.821 "is_configured": true, 00:12:39.821 "data_offset": 0, 00:12:39.821 "data_size": 65536 00:12:39.821 }, 00:12:39.821 { 00:12:39.821 "name": "BaseBdev2", 00:12:39.821 "uuid": "733b8f70-7980-42d9-b1fc-5a32b2463154", 00:12:39.821 "is_configured": true, 00:12:39.821 "data_offset": 0, 00:12:39.821 "data_size": 65536 00:12:39.821 }, 00:12:39.821 { 00:12:39.821 "name": "BaseBdev3", 
00:12:39.821 "uuid": "7f661c18-d239-430d-9615-e1f16e3a7870", 00:12:39.821 "is_configured": true, 00:12:39.821 "data_offset": 0, 00:12:39.821 "data_size": 65536 00:12:39.821 }, 00:12:39.821 { 00:12:39.821 "name": "BaseBdev4", 00:12:39.821 "uuid": "e25b4c6e-f93f-4cf1-b065-21929d36386c", 00:12:39.821 "is_configured": true, 00:12:39.821 "data_offset": 0, 00:12:39.821 "data_size": 65536 00:12:39.821 } 00:12:39.821 ] 00:12:39.821 }' 00:12:39.821 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.821 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.412 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:40.412 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:40.412 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:40.412 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:40.412 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:40.412 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:40.412 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:40.412 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.412 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.412 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:40.412 [2024-12-10 21:39:40.981239] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:40.412 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.412 
21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:40.412 "name": "Existed_Raid", 00:12:40.412 "aliases": [ 00:12:40.412 "93774049-1b11-4599-88c2-4dcca4784710" 00:12:40.412 ], 00:12:40.412 "product_name": "Raid Volume", 00:12:40.412 "block_size": 512, 00:12:40.412 "num_blocks": 262144, 00:12:40.412 "uuid": "93774049-1b11-4599-88c2-4dcca4784710", 00:12:40.412 "assigned_rate_limits": { 00:12:40.412 "rw_ios_per_sec": 0, 00:12:40.412 "rw_mbytes_per_sec": 0, 00:12:40.412 "r_mbytes_per_sec": 0, 00:12:40.412 "w_mbytes_per_sec": 0 00:12:40.412 }, 00:12:40.412 "claimed": false, 00:12:40.412 "zoned": false, 00:12:40.412 "supported_io_types": { 00:12:40.412 "read": true, 00:12:40.412 "write": true, 00:12:40.412 "unmap": true, 00:12:40.412 "flush": true, 00:12:40.412 "reset": true, 00:12:40.412 "nvme_admin": false, 00:12:40.412 "nvme_io": false, 00:12:40.412 "nvme_io_md": false, 00:12:40.412 "write_zeroes": true, 00:12:40.412 "zcopy": false, 00:12:40.412 "get_zone_info": false, 00:12:40.412 "zone_management": false, 00:12:40.412 "zone_append": false, 00:12:40.412 "compare": false, 00:12:40.412 "compare_and_write": false, 00:12:40.412 "abort": false, 00:12:40.412 "seek_hole": false, 00:12:40.412 "seek_data": false, 00:12:40.412 "copy": false, 00:12:40.412 "nvme_iov_md": false 00:12:40.412 }, 00:12:40.412 "memory_domains": [ 00:12:40.412 { 00:12:40.412 "dma_device_id": "system", 00:12:40.412 "dma_device_type": 1 00:12:40.412 }, 00:12:40.412 { 00:12:40.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.412 "dma_device_type": 2 00:12:40.412 }, 00:12:40.412 { 00:12:40.412 "dma_device_id": "system", 00:12:40.412 "dma_device_type": 1 00:12:40.412 }, 00:12:40.412 { 00:12:40.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.412 "dma_device_type": 2 00:12:40.412 }, 00:12:40.412 { 00:12:40.412 "dma_device_id": "system", 00:12:40.412 "dma_device_type": 1 00:12:40.412 }, 00:12:40.412 { 00:12:40.412 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:40.412 "dma_device_type": 2 00:12:40.412 }, 00:12:40.412 { 00:12:40.412 "dma_device_id": "system", 00:12:40.412 "dma_device_type": 1 00:12:40.412 }, 00:12:40.412 { 00:12:40.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.412 "dma_device_type": 2 00:12:40.412 } 00:12:40.412 ], 00:12:40.412 "driver_specific": { 00:12:40.412 "raid": { 00:12:40.413 "uuid": "93774049-1b11-4599-88c2-4dcca4784710", 00:12:40.413 "strip_size_kb": 64, 00:12:40.413 "state": "online", 00:12:40.413 "raid_level": "concat", 00:12:40.413 "superblock": false, 00:12:40.413 "num_base_bdevs": 4, 00:12:40.413 "num_base_bdevs_discovered": 4, 00:12:40.413 "num_base_bdevs_operational": 4, 00:12:40.413 "base_bdevs_list": [ 00:12:40.413 { 00:12:40.413 "name": "BaseBdev1", 00:12:40.413 "uuid": "3abde4a3-e2cb-4886-8d51-4862e1708bc1", 00:12:40.413 "is_configured": true, 00:12:40.413 "data_offset": 0, 00:12:40.413 "data_size": 65536 00:12:40.413 }, 00:12:40.413 { 00:12:40.413 "name": "BaseBdev2", 00:12:40.413 "uuid": "733b8f70-7980-42d9-b1fc-5a32b2463154", 00:12:40.413 "is_configured": true, 00:12:40.413 "data_offset": 0, 00:12:40.413 "data_size": 65536 00:12:40.413 }, 00:12:40.413 { 00:12:40.413 "name": "BaseBdev3", 00:12:40.413 "uuid": "7f661c18-d239-430d-9615-e1f16e3a7870", 00:12:40.413 "is_configured": true, 00:12:40.413 "data_offset": 0, 00:12:40.413 "data_size": 65536 00:12:40.413 }, 00:12:40.413 { 00:12:40.413 "name": "BaseBdev4", 00:12:40.413 "uuid": "e25b4c6e-f93f-4cf1-b065-21929d36386c", 00:12:40.413 "is_configured": true, 00:12:40.413 "data_offset": 0, 00:12:40.413 "data_size": 65536 00:12:40.413 } 00:12:40.413 ] 00:12:40.413 } 00:12:40.413 } 00:12:40.413 }' 00:12:40.413 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:40.413 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:40.413 BaseBdev2 
00:12:40.413 BaseBdev3 00:12:40.413 BaseBdev4' 00:12:40.413 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.413 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:40.413 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.413 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:40.413 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.413 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.413 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.413 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.413 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:40.413 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:40.413 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.413 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.413 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:40.413 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.413 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.413 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.671 21:39:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:40.671 21:39:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.671 [2024-12-10 21:39:41.308367] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:40.671 [2024-12-10 21:39:41.308502] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:40.671 [2024-12-10 21:39:41.308570] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.671 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.929 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.929 "name": "Existed_Raid", 00:12:40.929 "uuid": "93774049-1b11-4599-88c2-4dcca4784710", 00:12:40.929 "strip_size_kb": 64, 00:12:40.929 "state": "offline", 00:12:40.930 "raid_level": "concat", 00:12:40.930 "superblock": false, 00:12:40.930 "num_base_bdevs": 4, 00:12:40.930 "num_base_bdevs_discovered": 3, 00:12:40.930 "num_base_bdevs_operational": 3, 00:12:40.930 "base_bdevs_list": [ 00:12:40.930 { 00:12:40.930 "name": null, 00:12:40.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.930 "is_configured": false, 00:12:40.930 "data_offset": 0, 00:12:40.930 "data_size": 65536 00:12:40.930 }, 00:12:40.930 { 00:12:40.930 "name": "BaseBdev2", 00:12:40.930 "uuid": "733b8f70-7980-42d9-b1fc-5a32b2463154", 00:12:40.930 "is_configured": 
true, 00:12:40.930 "data_offset": 0, 00:12:40.930 "data_size": 65536 00:12:40.930 }, 00:12:40.930 { 00:12:40.930 "name": "BaseBdev3", 00:12:40.930 "uuid": "7f661c18-d239-430d-9615-e1f16e3a7870", 00:12:40.930 "is_configured": true, 00:12:40.930 "data_offset": 0, 00:12:40.930 "data_size": 65536 00:12:40.930 }, 00:12:40.930 { 00:12:40.930 "name": "BaseBdev4", 00:12:40.930 "uuid": "e25b4c6e-f93f-4cf1-b065-21929d36386c", 00:12:40.930 "is_configured": true, 00:12:40.930 "data_offset": 0, 00:12:40.930 "data_size": 65536 00:12:40.930 } 00:12:40.930 ] 00:12:40.930 }' 00:12:40.930 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.930 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.188 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:41.188 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:41.188 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:41.188 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.188 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.188 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.188 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.188 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:41.188 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:41.188 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:41.188 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:41.188 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.188 [2024-12-10 21:39:41.923094] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:41.447 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.447 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:41.447 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:41.447 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.447 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:41.447 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.447 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.447 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.447 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:41.447 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:41.447 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:41.447 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.448 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.448 [2024-12-10 21:39:42.078285] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:41.448 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.448 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:41.448 21:39:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:41.448 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:41.448 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.448 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.448 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.448 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.448 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:41.448 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:41.448 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:41.448 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.448 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.448 [2024-12-10 21:39:42.225584] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:41.448 [2024-12-10 21:39:42.225634] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:41.707 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.707 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:41.707 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:41.707 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:41.707 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:41.707 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.707 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.707 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.707 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:41.707 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:41.707 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:41.707 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:41.707 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:41.707 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:41.707 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.707 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.707 BaseBdev2 00:12:41.707 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.707 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:41.707 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:41.707 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:41.707 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:41.707 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:41.707 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:12:41.707 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:41.707 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.707 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.707 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.707 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:41.707 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.707 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.707 [ 00:12:41.707 { 00:12:41.707 "name": "BaseBdev2", 00:12:41.707 "aliases": [ 00:12:41.707 "38f0639b-41ee-4d7c-bb4b-b2018d071ebe" 00:12:41.707 ], 00:12:41.707 "product_name": "Malloc disk", 00:12:41.707 "block_size": 512, 00:12:41.707 "num_blocks": 65536, 00:12:41.707 "uuid": "38f0639b-41ee-4d7c-bb4b-b2018d071ebe", 00:12:41.707 "assigned_rate_limits": { 00:12:41.707 "rw_ios_per_sec": 0, 00:12:41.707 "rw_mbytes_per_sec": 0, 00:12:41.707 "r_mbytes_per_sec": 0, 00:12:41.707 "w_mbytes_per_sec": 0 00:12:41.707 }, 00:12:41.707 "claimed": false, 00:12:41.707 "zoned": false, 00:12:41.707 "supported_io_types": { 00:12:41.707 "read": true, 00:12:41.707 "write": true, 00:12:41.707 "unmap": true, 00:12:41.707 "flush": true, 00:12:41.707 "reset": true, 00:12:41.707 "nvme_admin": false, 00:12:41.707 "nvme_io": false, 00:12:41.707 "nvme_io_md": false, 00:12:41.707 "write_zeroes": true, 00:12:41.707 "zcopy": true, 00:12:41.707 "get_zone_info": false, 00:12:41.707 "zone_management": false, 00:12:41.707 "zone_append": false, 00:12:41.707 "compare": false, 00:12:41.707 "compare_and_write": false, 00:12:41.707 "abort": true, 00:12:41.707 "seek_hole": false, 00:12:41.707 "seek_data": false, 
00:12:41.707 "copy": true, 00:12:41.707 "nvme_iov_md": false 00:12:41.707 }, 00:12:41.707 "memory_domains": [ 00:12:41.707 { 00:12:41.707 "dma_device_id": "system", 00:12:41.707 "dma_device_type": 1 00:12:41.707 }, 00:12:41.707 { 00:12:41.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.707 "dma_device_type": 2 00:12:41.707 } 00:12:41.707 ], 00:12:41.707 "driver_specific": {} 00:12:41.707 } 00:12:41.707 ] 00:12:41.707 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.707 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:41.707 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:41.707 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:41.707 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:41.707 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.707 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.968 BaseBdev3 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:41.968 
21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.968 [ 00:12:41.968 { 00:12:41.968 "name": "BaseBdev3", 00:12:41.968 "aliases": [ 00:12:41.968 "6640fc64-d7d4-4629-a321-ab446a59c212" 00:12:41.968 ], 00:12:41.968 "product_name": "Malloc disk", 00:12:41.968 "block_size": 512, 00:12:41.968 "num_blocks": 65536, 00:12:41.968 "uuid": "6640fc64-d7d4-4629-a321-ab446a59c212", 00:12:41.968 "assigned_rate_limits": { 00:12:41.968 "rw_ios_per_sec": 0, 00:12:41.968 "rw_mbytes_per_sec": 0, 00:12:41.968 "r_mbytes_per_sec": 0, 00:12:41.968 "w_mbytes_per_sec": 0 00:12:41.968 }, 00:12:41.968 "claimed": false, 00:12:41.968 "zoned": false, 00:12:41.968 "supported_io_types": { 00:12:41.968 "read": true, 00:12:41.968 "write": true, 00:12:41.968 "unmap": true, 00:12:41.968 "flush": true, 00:12:41.968 "reset": true, 00:12:41.968 "nvme_admin": false, 00:12:41.968 "nvme_io": false, 00:12:41.968 "nvme_io_md": false, 00:12:41.968 "write_zeroes": true, 00:12:41.968 "zcopy": true, 00:12:41.968 "get_zone_info": false, 00:12:41.968 "zone_management": false, 00:12:41.968 "zone_append": false, 00:12:41.968 "compare": false, 00:12:41.968 "compare_and_write": false, 00:12:41.968 "abort": true, 00:12:41.968 "seek_hole": false, 00:12:41.968 "seek_data": false, 00:12:41.968 
"copy": true, 00:12:41.968 "nvme_iov_md": false 00:12:41.968 }, 00:12:41.968 "memory_domains": [ 00:12:41.968 { 00:12:41.968 "dma_device_id": "system", 00:12:41.968 "dma_device_type": 1 00:12:41.968 }, 00:12:41.968 { 00:12:41.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.968 "dma_device_type": 2 00:12:41.968 } 00:12:41.968 ], 00:12:41.968 "driver_specific": {} 00:12:41.968 } 00:12:41.968 ] 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.968 BaseBdev4 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:41.968 21:39:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.968 [ 00:12:41.968 { 00:12:41.968 "name": "BaseBdev4", 00:12:41.968 "aliases": [ 00:12:41.968 "67010ef3-a7b5-4407-8bb4-6a96bb9e5728" 00:12:41.968 ], 00:12:41.968 "product_name": "Malloc disk", 00:12:41.968 "block_size": 512, 00:12:41.968 "num_blocks": 65536, 00:12:41.968 "uuid": "67010ef3-a7b5-4407-8bb4-6a96bb9e5728", 00:12:41.968 "assigned_rate_limits": { 00:12:41.968 "rw_ios_per_sec": 0, 00:12:41.968 "rw_mbytes_per_sec": 0, 00:12:41.968 "r_mbytes_per_sec": 0, 00:12:41.968 "w_mbytes_per_sec": 0 00:12:41.968 }, 00:12:41.968 "claimed": false, 00:12:41.968 "zoned": false, 00:12:41.968 "supported_io_types": { 00:12:41.968 "read": true, 00:12:41.968 "write": true, 00:12:41.968 "unmap": true, 00:12:41.968 "flush": true, 00:12:41.968 "reset": true, 00:12:41.968 "nvme_admin": false, 00:12:41.968 "nvme_io": false, 00:12:41.968 "nvme_io_md": false, 00:12:41.968 "write_zeroes": true, 00:12:41.968 "zcopy": true, 00:12:41.968 "get_zone_info": false, 00:12:41.968 "zone_management": false, 00:12:41.968 "zone_append": false, 00:12:41.968 "compare": false, 00:12:41.968 "compare_and_write": false, 00:12:41.968 "abort": true, 00:12:41.968 "seek_hole": false, 00:12:41.968 "seek_data": false, 00:12:41.968 "copy": true, 
00:12:41.968 "nvme_iov_md": false 00:12:41.968 }, 00:12:41.968 "memory_domains": [ 00:12:41.968 { 00:12:41.968 "dma_device_id": "system", 00:12:41.968 "dma_device_type": 1 00:12:41.968 }, 00:12:41.968 { 00:12:41.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.968 "dma_device_type": 2 00:12:41.968 } 00:12:41.968 ], 00:12:41.968 "driver_specific": {} 00:12:41.968 } 00:12:41.968 ] 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.968 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.968 [2024-12-10 21:39:42.621527] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:41.968 [2024-12-10 21:39:42.621627] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:41.968 [2024-12-10 21:39:42.621691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:41.968 [2024-12-10 21:39:42.623912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:41.968 [2024-12-10 21:39:42.624015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:41.969 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.969 21:39:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:41.969 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:41.969 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:41.969 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:41.969 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:41.969 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:41.969 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.969 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.969 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.969 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.969 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.969 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.969 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.969 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.969 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.969 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.969 "name": "Existed_Raid", 00:12:41.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.969 "strip_size_kb": 64, 00:12:41.969 "state": "configuring", 00:12:41.969 
"raid_level": "concat", 00:12:41.969 "superblock": false, 00:12:41.969 "num_base_bdevs": 4, 00:12:41.969 "num_base_bdevs_discovered": 3, 00:12:41.969 "num_base_bdevs_operational": 4, 00:12:41.969 "base_bdevs_list": [ 00:12:41.969 { 00:12:41.969 "name": "BaseBdev1", 00:12:41.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.969 "is_configured": false, 00:12:41.969 "data_offset": 0, 00:12:41.969 "data_size": 0 00:12:41.969 }, 00:12:41.969 { 00:12:41.969 "name": "BaseBdev2", 00:12:41.969 "uuid": "38f0639b-41ee-4d7c-bb4b-b2018d071ebe", 00:12:41.969 "is_configured": true, 00:12:41.969 "data_offset": 0, 00:12:41.969 "data_size": 65536 00:12:41.969 }, 00:12:41.969 { 00:12:41.969 "name": "BaseBdev3", 00:12:41.969 "uuid": "6640fc64-d7d4-4629-a321-ab446a59c212", 00:12:41.969 "is_configured": true, 00:12:41.969 "data_offset": 0, 00:12:41.969 "data_size": 65536 00:12:41.969 }, 00:12:41.969 { 00:12:41.969 "name": "BaseBdev4", 00:12:41.969 "uuid": "67010ef3-a7b5-4407-8bb4-6a96bb9e5728", 00:12:41.969 "is_configured": true, 00:12:41.969 "data_offset": 0, 00:12:41.969 "data_size": 65536 00:12:41.969 } 00:12:41.969 ] 00:12:41.969 }' 00:12:41.969 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.969 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.537 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:42.537 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.537 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.537 [2024-12-10 21:39:43.040845] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:42.537 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.537 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:42.537 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:42.537 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:42.537 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:42.537 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:42.537 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:42.537 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.537 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.537 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.537 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.537 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.537 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.537 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.537 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.537 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.538 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.538 "name": "Existed_Raid", 00:12:42.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.538 "strip_size_kb": 64, 00:12:42.538 "state": "configuring", 00:12:42.538 "raid_level": "concat", 00:12:42.538 "superblock": false, 
00:12:42.538 "num_base_bdevs": 4, 00:12:42.538 "num_base_bdevs_discovered": 2, 00:12:42.538 "num_base_bdevs_operational": 4, 00:12:42.538 "base_bdevs_list": [ 00:12:42.538 { 00:12:42.538 "name": "BaseBdev1", 00:12:42.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.538 "is_configured": false, 00:12:42.538 "data_offset": 0, 00:12:42.538 "data_size": 0 00:12:42.538 }, 00:12:42.538 { 00:12:42.538 "name": null, 00:12:42.538 "uuid": "38f0639b-41ee-4d7c-bb4b-b2018d071ebe", 00:12:42.538 "is_configured": false, 00:12:42.538 "data_offset": 0, 00:12:42.538 "data_size": 65536 00:12:42.538 }, 00:12:42.538 { 00:12:42.538 "name": "BaseBdev3", 00:12:42.538 "uuid": "6640fc64-d7d4-4629-a321-ab446a59c212", 00:12:42.538 "is_configured": true, 00:12:42.538 "data_offset": 0, 00:12:42.538 "data_size": 65536 00:12:42.538 }, 00:12:42.538 { 00:12:42.538 "name": "BaseBdev4", 00:12:42.538 "uuid": "67010ef3-a7b5-4407-8bb4-6a96bb9e5728", 00:12:42.538 "is_configured": true, 00:12:42.538 "data_offset": 0, 00:12:42.538 "data_size": 65536 00:12:42.538 } 00:12:42.538 ] 00:12:42.538 }' 00:12:42.538 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.538 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.798 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:42.798 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.798 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.798 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.798 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.798 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:42.798 21:39:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:42.798 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.798 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.059 [2024-12-10 21:39:43.581916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:43.059 BaseBdev1 00:12:43.059 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.059 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:43.059 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:43.059 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:43.059 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:43.059 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:43.059 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:43.059 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:43.059 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.059 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.059 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.059 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:43.059 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.059 21:39:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:43.059 [ 00:12:43.059 { 00:12:43.059 "name": "BaseBdev1", 00:12:43.059 "aliases": [ 00:12:43.059 "668d612f-3834-4909-9b6c-7c5e6968c69d" 00:12:43.059 ], 00:12:43.059 "product_name": "Malloc disk", 00:12:43.059 "block_size": 512, 00:12:43.059 "num_blocks": 65536, 00:12:43.059 "uuid": "668d612f-3834-4909-9b6c-7c5e6968c69d", 00:12:43.059 "assigned_rate_limits": { 00:12:43.059 "rw_ios_per_sec": 0, 00:12:43.059 "rw_mbytes_per_sec": 0, 00:12:43.059 "r_mbytes_per_sec": 0, 00:12:43.059 "w_mbytes_per_sec": 0 00:12:43.059 }, 00:12:43.059 "claimed": true, 00:12:43.059 "claim_type": "exclusive_write", 00:12:43.059 "zoned": false, 00:12:43.059 "supported_io_types": { 00:12:43.059 "read": true, 00:12:43.059 "write": true, 00:12:43.059 "unmap": true, 00:12:43.059 "flush": true, 00:12:43.059 "reset": true, 00:12:43.059 "nvme_admin": false, 00:12:43.059 "nvme_io": false, 00:12:43.059 "nvme_io_md": false, 00:12:43.059 "write_zeroes": true, 00:12:43.059 "zcopy": true, 00:12:43.059 "get_zone_info": false, 00:12:43.059 "zone_management": false, 00:12:43.059 "zone_append": false, 00:12:43.059 "compare": false, 00:12:43.059 "compare_and_write": false, 00:12:43.059 "abort": true, 00:12:43.059 "seek_hole": false, 00:12:43.059 "seek_data": false, 00:12:43.060 "copy": true, 00:12:43.060 "nvme_iov_md": false 00:12:43.060 }, 00:12:43.060 "memory_domains": [ 00:12:43.060 { 00:12:43.060 "dma_device_id": "system", 00:12:43.060 "dma_device_type": 1 00:12:43.060 }, 00:12:43.060 { 00:12:43.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.060 "dma_device_type": 2 00:12:43.060 } 00:12:43.060 ], 00:12:43.060 "driver_specific": {} 00:12:43.060 } 00:12:43.060 ] 00:12:43.060 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.060 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:43.060 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:43.060 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.060 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.060 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:43.060 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:43.060 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:43.060 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.060 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.060 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.060 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.060 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.060 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.060 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.060 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.060 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.060 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.060 "name": "Existed_Raid", 00:12:43.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.060 "strip_size_kb": 64, 00:12:43.060 "state": "configuring", 00:12:43.060 "raid_level": "concat", 00:12:43.060 "superblock": false, 
00:12:43.060 "num_base_bdevs": 4, 00:12:43.060 "num_base_bdevs_discovered": 3, 00:12:43.060 "num_base_bdevs_operational": 4, 00:12:43.060 "base_bdevs_list": [ 00:12:43.060 { 00:12:43.060 "name": "BaseBdev1", 00:12:43.060 "uuid": "668d612f-3834-4909-9b6c-7c5e6968c69d", 00:12:43.060 "is_configured": true, 00:12:43.060 "data_offset": 0, 00:12:43.060 "data_size": 65536 00:12:43.060 }, 00:12:43.060 { 00:12:43.060 "name": null, 00:12:43.060 "uuid": "38f0639b-41ee-4d7c-bb4b-b2018d071ebe", 00:12:43.060 "is_configured": false, 00:12:43.060 "data_offset": 0, 00:12:43.060 "data_size": 65536 00:12:43.060 }, 00:12:43.060 { 00:12:43.060 "name": "BaseBdev3", 00:12:43.060 "uuid": "6640fc64-d7d4-4629-a321-ab446a59c212", 00:12:43.060 "is_configured": true, 00:12:43.060 "data_offset": 0, 00:12:43.060 "data_size": 65536 00:12:43.060 }, 00:12:43.060 { 00:12:43.060 "name": "BaseBdev4", 00:12:43.060 "uuid": "67010ef3-a7b5-4407-8bb4-6a96bb9e5728", 00:12:43.060 "is_configured": true, 00:12:43.060 "data_offset": 0, 00:12:43.060 "data_size": 65536 00:12:43.060 } 00:12:43.060 ] 00:12:43.060 }' 00:12:43.060 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.060 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.319 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.319 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:43.319 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.319 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.319 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.577 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:43.577 21:39:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:43.577 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.577 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.577 [2024-12-10 21:39:44.121179] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:43.577 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.577 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:43.577 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.577 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.577 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:43.577 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:43.577 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:43.577 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.577 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.577 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.577 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.577 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.577 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.577 21:39:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.577 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.577 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.577 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.577 "name": "Existed_Raid", 00:12:43.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.577 "strip_size_kb": 64, 00:12:43.577 "state": "configuring", 00:12:43.577 "raid_level": "concat", 00:12:43.577 "superblock": false, 00:12:43.577 "num_base_bdevs": 4, 00:12:43.578 "num_base_bdevs_discovered": 2, 00:12:43.578 "num_base_bdevs_operational": 4, 00:12:43.578 "base_bdevs_list": [ 00:12:43.578 { 00:12:43.578 "name": "BaseBdev1", 00:12:43.578 "uuid": "668d612f-3834-4909-9b6c-7c5e6968c69d", 00:12:43.578 "is_configured": true, 00:12:43.578 "data_offset": 0, 00:12:43.578 "data_size": 65536 00:12:43.578 }, 00:12:43.578 { 00:12:43.578 "name": null, 00:12:43.578 "uuid": "38f0639b-41ee-4d7c-bb4b-b2018d071ebe", 00:12:43.578 "is_configured": false, 00:12:43.578 "data_offset": 0, 00:12:43.578 "data_size": 65536 00:12:43.578 }, 00:12:43.578 { 00:12:43.578 "name": null, 00:12:43.578 "uuid": "6640fc64-d7d4-4629-a321-ab446a59c212", 00:12:43.578 "is_configured": false, 00:12:43.578 "data_offset": 0, 00:12:43.578 "data_size": 65536 00:12:43.578 }, 00:12:43.578 { 00:12:43.578 "name": "BaseBdev4", 00:12:43.578 "uuid": "67010ef3-a7b5-4407-8bb4-6a96bb9e5728", 00:12:43.578 "is_configured": true, 00:12:43.578 "data_offset": 0, 00:12:43.578 "data_size": 65536 00:12:43.578 } 00:12:43.578 ] 00:12:43.578 }' 00:12:43.578 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.578 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.854 21:39:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.854 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:43.854 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.854 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.854 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.854 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:43.854 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:43.854 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.854 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.113 [2024-12-10 21:39:44.640256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:44.113 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.113 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:44.113 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.113 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.113 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:44.113 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.113 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:44.113 21:39:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.113 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.113 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.113 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.113 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.113 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.113 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.113 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.113 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.113 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.113 "name": "Existed_Raid", 00:12:44.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.113 "strip_size_kb": 64, 00:12:44.113 "state": "configuring", 00:12:44.113 "raid_level": "concat", 00:12:44.113 "superblock": false, 00:12:44.113 "num_base_bdevs": 4, 00:12:44.113 "num_base_bdevs_discovered": 3, 00:12:44.113 "num_base_bdevs_operational": 4, 00:12:44.113 "base_bdevs_list": [ 00:12:44.113 { 00:12:44.113 "name": "BaseBdev1", 00:12:44.113 "uuid": "668d612f-3834-4909-9b6c-7c5e6968c69d", 00:12:44.113 "is_configured": true, 00:12:44.113 "data_offset": 0, 00:12:44.113 "data_size": 65536 00:12:44.113 }, 00:12:44.113 { 00:12:44.113 "name": null, 00:12:44.113 "uuid": "38f0639b-41ee-4d7c-bb4b-b2018d071ebe", 00:12:44.113 "is_configured": false, 00:12:44.113 "data_offset": 0, 00:12:44.113 "data_size": 65536 00:12:44.113 }, 00:12:44.113 { 00:12:44.113 "name": "BaseBdev3", 00:12:44.113 "uuid": 
"6640fc64-d7d4-4629-a321-ab446a59c212", 00:12:44.113 "is_configured": true, 00:12:44.113 "data_offset": 0, 00:12:44.113 "data_size": 65536 00:12:44.113 }, 00:12:44.113 { 00:12:44.113 "name": "BaseBdev4", 00:12:44.113 "uuid": "67010ef3-a7b5-4407-8bb4-6a96bb9e5728", 00:12:44.113 "is_configured": true, 00:12:44.113 "data_offset": 0, 00:12:44.113 "data_size": 65536 00:12:44.113 } 00:12:44.113 ] 00:12:44.113 }' 00:12:44.113 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.114 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.373 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:44.373 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.373 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.373 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.373 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.373 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:44.373 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:44.373 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.373 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.373 [2024-12-10 21:39:45.091607] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:44.631 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.631 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:12:44.631 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.631 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.631 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:44.631 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.631 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:44.631 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.631 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.631 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.631 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.631 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.631 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.631 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.631 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.631 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.631 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.631 "name": "Existed_Raid", 00:12:44.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.631 "strip_size_kb": 64, 00:12:44.631 "state": "configuring", 00:12:44.631 "raid_level": "concat", 00:12:44.631 "superblock": false, 00:12:44.631 "num_base_bdevs": 4, 00:12:44.631 
"num_base_bdevs_discovered": 2, 00:12:44.631 "num_base_bdevs_operational": 4, 00:12:44.631 "base_bdevs_list": [ 00:12:44.631 { 00:12:44.631 "name": null, 00:12:44.631 "uuid": "668d612f-3834-4909-9b6c-7c5e6968c69d", 00:12:44.631 "is_configured": false, 00:12:44.631 "data_offset": 0, 00:12:44.631 "data_size": 65536 00:12:44.631 }, 00:12:44.631 { 00:12:44.631 "name": null, 00:12:44.631 "uuid": "38f0639b-41ee-4d7c-bb4b-b2018d071ebe", 00:12:44.631 "is_configured": false, 00:12:44.631 "data_offset": 0, 00:12:44.631 "data_size": 65536 00:12:44.632 }, 00:12:44.632 { 00:12:44.632 "name": "BaseBdev3", 00:12:44.632 "uuid": "6640fc64-d7d4-4629-a321-ab446a59c212", 00:12:44.632 "is_configured": true, 00:12:44.632 "data_offset": 0, 00:12:44.632 "data_size": 65536 00:12:44.632 }, 00:12:44.632 { 00:12:44.632 "name": "BaseBdev4", 00:12:44.632 "uuid": "67010ef3-a7b5-4407-8bb4-6a96bb9e5728", 00:12:44.632 "is_configured": true, 00:12:44.632 "data_offset": 0, 00:12:44.632 "data_size": 65536 00:12:44.632 } 00:12:44.632 ] 00:12:44.632 }' 00:12:44.632 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.632 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.891 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.891 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:44.891 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.891 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.891 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.891 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:44.891 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:44.891 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.891 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.891 [2024-12-10 21:39:45.619964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:44.891 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.891 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:44.891 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.891 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.891 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:44.891 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.891 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:44.891 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.891 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.891 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.891 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.891 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.891 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.891 21:39:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.891 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.891 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.150 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.150 "name": "Existed_Raid", 00:12:45.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.150 "strip_size_kb": 64, 00:12:45.150 "state": "configuring", 00:12:45.150 "raid_level": "concat", 00:12:45.150 "superblock": false, 00:12:45.150 "num_base_bdevs": 4, 00:12:45.150 "num_base_bdevs_discovered": 3, 00:12:45.150 "num_base_bdevs_operational": 4, 00:12:45.150 "base_bdevs_list": [ 00:12:45.150 { 00:12:45.150 "name": null, 00:12:45.150 "uuid": "668d612f-3834-4909-9b6c-7c5e6968c69d", 00:12:45.150 "is_configured": false, 00:12:45.150 "data_offset": 0, 00:12:45.150 "data_size": 65536 00:12:45.150 }, 00:12:45.150 { 00:12:45.150 "name": "BaseBdev2", 00:12:45.150 "uuid": "38f0639b-41ee-4d7c-bb4b-b2018d071ebe", 00:12:45.150 "is_configured": true, 00:12:45.150 "data_offset": 0, 00:12:45.150 "data_size": 65536 00:12:45.150 }, 00:12:45.150 { 00:12:45.150 "name": "BaseBdev3", 00:12:45.150 "uuid": "6640fc64-d7d4-4629-a321-ab446a59c212", 00:12:45.150 "is_configured": true, 00:12:45.150 "data_offset": 0, 00:12:45.150 "data_size": 65536 00:12:45.150 }, 00:12:45.150 { 00:12:45.150 "name": "BaseBdev4", 00:12:45.150 "uuid": "67010ef3-a7b5-4407-8bb4-6a96bb9e5728", 00:12:45.150 "is_configured": true, 00:12:45.150 "data_offset": 0, 00:12:45.150 "data_size": 65536 00:12:45.150 } 00:12:45.150 ] 00:12:45.150 }' 00:12:45.150 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.150 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.410 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:45.410 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.410 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.410 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:45.410 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.410 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:45.410 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.410 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.410 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.410 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:45.410 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.410 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 668d612f-3834-4909-9b6c-7c5e6968c69d 00:12:45.410 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.410 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.669 [2024-12-10 21:39:46.225155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:45.669 [2024-12-10 21:39:46.225223] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:45.669 [2024-12-10 21:39:46.225231] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:45.669 [2024-12-10 21:39:46.225515] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:45.669 [2024-12-10 21:39:46.225664] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:45.669 [2024-12-10 21:39:46.225683] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:45.669 [2024-12-10 21:39:46.225959] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.669 NewBaseBdev 00:12:45.670 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.670 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:45.670 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:45.670 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:45.670 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:45.670 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:45.670 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:45.670 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:45.670 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.670 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.670 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.670 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:45.670 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.670 21:39:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:45.670 [ 00:12:45.670 { 00:12:45.670 "name": "NewBaseBdev", 00:12:45.670 "aliases": [ 00:12:45.670 "668d612f-3834-4909-9b6c-7c5e6968c69d" 00:12:45.670 ], 00:12:45.670 "product_name": "Malloc disk", 00:12:45.670 "block_size": 512, 00:12:45.670 "num_blocks": 65536, 00:12:45.670 "uuid": "668d612f-3834-4909-9b6c-7c5e6968c69d", 00:12:45.670 "assigned_rate_limits": { 00:12:45.670 "rw_ios_per_sec": 0, 00:12:45.670 "rw_mbytes_per_sec": 0, 00:12:45.670 "r_mbytes_per_sec": 0, 00:12:45.670 "w_mbytes_per_sec": 0 00:12:45.670 }, 00:12:45.670 "claimed": true, 00:12:45.670 "claim_type": "exclusive_write", 00:12:45.670 "zoned": false, 00:12:45.670 "supported_io_types": { 00:12:45.670 "read": true, 00:12:45.670 "write": true, 00:12:45.670 "unmap": true, 00:12:45.670 "flush": true, 00:12:45.670 "reset": true, 00:12:45.670 "nvme_admin": false, 00:12:45.670 "nvme_io": false, 00:12:45.670 "nvme_io_md": false, 00:12:45.670 "write_zeroes": true, 00:12:45.670 "zcopy": true, 00:12:45.670 "get_zone_info": false, 00:12:45.670 "zone_management": false, 00:12:45.670 "zone_append": false, 00:12:45.670 "compare": false, 00:12:45.670 "compare_and_write": false, 00:12:45.670 "abort": true, 00:12:45.670 "seek_hole": false, 00:12:45.670 "seek_data": false, 00:12:45.670 "copy": true, 00:12:45.670 "nvme_iov_md": false 00:12:45.670 }, 00:12:45.670 "memory_domains": [ 00:12:45.670 { 00:12:45.670 "dma_device_id": "system", 00:12:45.670 "dma_device_type": 1 00:12:45.670 }, 00:12:45.670 { 00:12:45.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.670 "dma_device_type": 2 00:12:45.670 } 00:12:45.670 ], 00:12:45.670 "driver_specific": {} 00:12:45.670 } 00:12:45.670 ] 00:12:45.670 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.670 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:45.670 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:45.670 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:45.670 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.670 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:45.670 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:45.670 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:45.670 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.670 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.670 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.670 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.670 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.670 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.670 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.670 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.670 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.670 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.670 "name": "Existed_Raid", 00:12:45.670 "uuid": "418944cd-e516-41a4-8e28-b613fd0168f8", 00:12:45.670 "strip_size_kb": 64, 00:12:45.670 "state": "online", 00:12:45.670 "raid_level": "concat", 00:12:45.670 "superblock": false, 00:12:45.670 
"num_base_bdevs": 4, 00:12:45.670 "num_base_bdevs_discovered": 4, 00:12:45.670 "num_base_bdevs_operational": 4, 00:12:45.670 "base_bdevs_list": [ 00:12:45.670 { 00:12:45.670 "name": "NewBaseBdev", 00:12:45.670 "uuid": "668d612f-3834-4909-9b6c-7c5e6968c69d", 00:12:45.670 "is_configured": true, 00:12:45.670 "data_offset": 0, 00:12:45.670 "data_size": 65536 00:12:45.670 }, 00:12:45.670 { 00:12:45.670 "name": "BaseBdev2", 00:12:45.670 "uuid": "38f0639b-41ee-4d7c-bb4b-b2018d071ebe", 00:12:45.670 "is_configured": true, 00:12:45.670 "data_offset": 0, 00:12:45.670 "data_size": 65536 00:12:45.670 }, 00:12:45.670 { 00:12:45.670 "name": "BaseBdev3", 00:12:45.670 "uuid": "6640fc64-d7d4-4629-a321-ab446a59c212", 00:12:45.670 "is_configured": true, 00:12:45.670 "data_offset": 0, 00:12:45.670 "data_size": 65536 00:12:45.670 }, 00:12:45.670 { 00:12:45.670 "name": "BaseBdev4", 00:12:45.670 "uuid": "67010ef3-a7b5-4407-8bb4-6a96bb9e5728", 00:12:45.670 "is_configured": true, 00:12:45.670 "data_offset": 0, 00:12:45.670 "data_size": 65536 00:12:45.670 } 00:12:45.670 ] 00:12:45.670 }' 00:12:45.670 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.670 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.930 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:45.930 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:45.930 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:45.930 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:45.930 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:45.930 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:45.930 21:39:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:45.930 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:45.930 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.930 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.930 [2024-12-10 21:39:46.652871] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:45.930 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.930 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:45.930 "name": "Existed_Raid", 00:12:45.930 "aliases": [ 00:12:45.930 "418944cd-e516-41a4-8e28-b613fd0168f8" 00:12:45.930 ], 00:12:45.930 "product_name": "Raid Volume", 00:12:45.930 "block_size": 512, 00:12:45.930 "num_blocks": 262144, 00:12:45.930 "uuid": "418944cd-e516-41a4-8e28-b613fd0168f8", 00:12:45.930 "assigned_rate_limits": { 00:12:45.930 "rw_ios_per_sec": 0, 00:12:45.930 "rw_mbytes_per_sec": 0, 00:12:45.930 "r_mbytes_per_sec": 0, 00:12:45.930 "w_mbytes_per_sec": 0 00:12:45.930 }, 00:12:45.930 "claimed": false, 00:12:45.930 "zoned": false, 00:12:45.930 "supported_io_types": { 00:12:45.930 "read": true, 00:12:45.930 "write": true, 00:12:45.930 "unmap": true, 00:12:45.930 "flush": true, 00:12:45.930 "reset": true, 00:12:45.930 "nvme_admin": false, 00:12:45.930 "nvme_io": false, 00:12:45.930 "nvme_io_md": false, 00:12:45.930 "write_zeroes": true, 00:12:45.930 "zcopy": false, 00:12:45.930 "get_zone_info": false, 00:12:45.930 "zone_management": false, 00:12:45.930 "zone_append": false, 00:12:45.930 "compare": false, 00:12:45.930 "compare_and_write": false, 00:12:45.930 "abort": false, 00:12:45.930 "seek_hole": false, 00:12:45.930 "seek_data": false, 00:12:45.930 "copy": false, 00:12:45.930 "nvme_iov_md": false 00:12:45.930 }, 
00:12:45.930 "memory_domains": [ 00:12:45.930 { 00:12:45.930 "dma_device_id": "system", 00:12:45.930 "dma_device_type": 1 00:12:45.930 }, 00:12:45.930 { 00:12:45.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.930 "dma_device_type": 2 00:12:45.930 }, 00:12:45.930 { 00:12:45.930 "dma_device_id": "system", 00:12:45.930 "dma_device_type": 1 00:12:45.930 }, 00:12:45.930 { 00:12:45.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.930 "dma_device_type": 2 00:12:45.930 }, 00:12:45.930 { 00:12:45.930 "dma_device_id": "system", 00:12:45.930 "dma_device_type": 1 00:12:45.930 }, 00:12:45.930 { 00:12:45.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.930 "dma_device_type": 2 00:12:45.930 }, 00:12:45.930 { 00:12:45.930 "dma_device_id": "system", 00:12:45.930 "dma_device_type": 1 00:12:45.930 }, 00:12:45.930 { 00:12:45.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.930 "dma_device_type": 2 00:12:45.930 } 00:12:45.930 ], 00:12:45.930 "driver_specific": { 00:12:45.930 "raid": { 00:12:45.930 "uuid": "418944cd-e516-41a4-8e28-b613fd0168f8", 00:12:45.930 "strip_size_kb": 64, 00:12:45.930 "state": "online", 00:12:45.930 "raid_level": "concat", 00:12:45.930 "superblock": false, 00:12:45.930 "num_base_bdevs": 4, 00:12:45.930 "num_base_bdevs_discovered": 4, 00:12:45.930 "num_base_bdevs_operational": 4, 00:12:45.930 "base_bdevs_list": [ 00:12:45.930 { 00:12:45.930 "name": "NewBaseBdev", 00:12:45.930 "uuid": "668d612f-3834-4909-9b6c-7c5e6968c69d", 00:12:45.930 "is_configured": true, 00:12:45.930 "data_offset": 0, 00:12:45.930 "data_size": 65536 00:12:45.930 }, 00:12:45.930 { 00:12:45.930 "name": "BaseBdev2", 00:12:45.930 "uuid": "38f0639b-41ee-4d7c-bb4b-b2018d071ebe", 00:12:45.930 "is_configured": true, 00:12:45.930 "data_offset": 0, 00:12:45.930 "data_size": 65536 00:12:45.930 }, 00:12:45.930 { 00:12:45.930 "name": "BaseBdev3", 00:12:45.930 "uuid": "6640fc64-d7d4-4629-a321-ab446a59c212", 00:12:45.930 "is_configured": true, 00:12:45.930 "data_offset": 0, 
00:12:45.930 "data_size": 65536 00:12:45.930 }, 00:12:45.930 { 00:12:45.930 "name": "BaseBdev4", 00:12:45.930 "uuid": "67010ef3-a7b5-4407-8bb4-6a96bb9e5728", 00:12:45.930 "is_configured": true, 00:12:45.930 "data_offset": 0, 00:12:45.930 "data_size": 65536 00:12:45.930 } 00:12:45.930 ] 00:12:45.930 } 00:12:45.930 } 00:12:45.930 }' 00:12:45.930 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:46.190 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:46.190 BaseBdev2 00:12:46.190 BaseBdev3 00:12:46.190 BaseBdev4' 00:12:46.190 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.190 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:46.190 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:46.190 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:46.190 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.190 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.190 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.190 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.190 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:46.190 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:46.191 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:12:46.191 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:46.191 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.191 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.191 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.191 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.191 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:46.191 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:46.191 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:46.191 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:46.191 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.191 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.191 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.191 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.191 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:46.191 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:46.191 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:46.191 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:12:46.191 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.191 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.191 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.191 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.191 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:46.191 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:46.191 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:46.191 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.191 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.191 [2024-12-10 21:39:46.967993] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:46.191 [2024-12-10 21:39:46.968032] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:46.191 [2024-12-10 21:39:46.968130] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:46.191 [2024-12-10 21:39:46.968227] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:46.191 [2024-12-10 21:39:46.968246] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:46.451 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.451 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71414 00:12:46.451 21:39:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71414 ']' 00:12:46.451 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71414 00:12:46.451 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:46.451 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:46.451 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71414 00:12:46.451 killing process with pid 71414 00:12:46.451 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:46.451 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:46.451 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71414' 00:12:46.451 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71414 00:12:46.451 [2024-12-10 21:39:47.019829] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:46.451 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71414 00:12:46.710 [2024-12-10 21:39:47.445306] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:48.087 ************************************ 00:12:48.087 END TEST raid_state_function_test 00:12:48.087 ************************************ 00:12:48.087 21:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:48.087 00:12:48.087 real 0m11.706s 00:12:48.087 user 0m18.442s 00:12:48.087 sys 0m2.119s 00:12:48.087 21:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:48.087 21:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.087 21:39:48 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:12:48.087 21:39:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:48.087 21:39:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:48.087 21:39:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:48.087 ************************************ 00:12:48.087 START TEST raid_state_function_test_sb 00:12:48.087 ************************************ 00:12:48.087 21:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:12:48.087 21:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:48.087 21:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:48.087 21:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:48.087 21:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:48.087 21:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:48.087 21:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:48.087 21:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:48.087 21:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:48.087 21:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:48.087 21:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:48.087 21:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:48.087 21:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:48.087 21:39:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:48.087 21:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:48.087 21:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:48.087 21:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:48.087 21:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:48.087 21:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:48.087 21:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:48.087 21:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:48.087 21:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:48.087 21:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:48.087 21:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:48.087 21:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:48.087 21:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:48.087 21:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:48.087 21:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:48.087 21:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:48.087 21:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:48.087 21:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72085 00:12:48.087 Process raid 
pid: 72085 00:12:48.087 21:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:48.087 21:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72085' 00:12:48.087 21:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72085 00:12:48.087 21:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72085 ']' 00:12:48.087 21:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.088 21:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:48.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.088 21:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.088 21:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:48.088 21:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.088 [2024-12-10 21:39:48.855318] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
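The trace above repeatedly captures variables like `cmp_raid_bdev='512   '` by piping `rpc_cmd bdev_get_bdevs` output through `jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'` and then string-comparing them with `[[ 512    == \5\1\2\ \ \ ]]`. The trailing spaces are not noise: jq's `join` renders numbers as strings and nulls as empty strings, so a bdev with no metadata fields joins to `"512"` plus three separators. As a minimal sketch of that extraction (the bdev record below is abridged from the trace, with illustrative null metadata values):

```python
import json

# Abridged bdev record as dumped by rpc_cmd bdev_get_bdevs in the trace
# (illustrative values; a Malloc base bdev with no metadata/DIF configured)
bdev = json.loads(
    '{"name": "BaseBdev1", "block_size": 512, '
    '"md_size": null, "md_interleave": null, "dif_type": null}'
)

# Equivalent of the trace's filter at bdev_raid.sh@189/@192:
#   jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
# jq's join() stringifies numbers and treats nulls as empty strings, which is
# why the trace records cmp_base_bdev='512   ' with three trailing spaces.
fields = [bdev["block_size"], bdev["md_size"], bdev["md_interleave"], bdev["dif_type"]]
cmp_base_bdev = " ".join("" if f is None else str(f) for f in fields)

print(repr(cmp_base_bdev))  # -> '512   '
```

This explains the otherwise odd-looking escaped comparison `[[ 512    == \5\1\2\ \ \ ]]` in the trace: the raid bdev's joined geometry string must match each base bdev's joined geometry string byte-for-byte, trailing separators included.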
00:12:48.088 [2024-12-10 21:39:48.855478] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:48.347 [2024-12-10 21:39:49.020826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.606 [2024-12-10 21:39:49.145770] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.606 [2024-12-10 21:39:49.362339] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:48.606 [2024-12-10 21:39:49.362381] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:49.174 21:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:49.174 21:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:49.174 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:49.174 21:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.174 21:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.174 [2024-12-10 21:39:49.699706] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:49.174 [2024-12-10 21:39:49.699767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:49.174 [2024-12-10 21:39:49.699779] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:49.174 [2024-12-10 21:39:49.699807] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:49.174 [2024-12-10 21:39:49.699823] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:12:49.174 [2024-12-10 21:39:49.699834] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:49.174 [2024-12-10 21:39:49.699846] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:49.174 [2024-12-10 21:39:49.699857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:49.174 21:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.174 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:49.174 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.174 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.174 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:49.174 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.174 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:49.174 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.174 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.174 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.174 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.174 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.174 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.174 
21:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.174 21:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.174 21:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.174 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.174 "name": "Existed_Raid", 00:12:49.174 "uuid": "c95d020f-a313-45a6-a5ab-6a24e0b7b4a6", 00:12:49.174 "strip_size_kb": 64, 00:12:49.174 "state": "configuring", 00:12:49.174 "raid_level": "concat", 00:12:49.174 "superblock": true, 00:12:49.174 "num_base_bdevs": 4, 00:12:49.174 "num_base_bdevs_discovered": 0, 00:12:49.174 "num_base_bdevs_operational": 4, 00:12:49.175 "base_bdevs_list": [ 00:12:49.175 { 00:12:49.175 "name": "BaseBdev1", 00:12:49.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.175 "is_configured": false, 00:12:49.175 "data_offset": 0, 00:12:49.175 "data_size": 0 00:12:49.175 }, 00:12:49.175 { 00:12:49.175 "name": "BaseBdev2", 00:12:49.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.175 "is_configured": false, 00:12:49.175 "data_offset": 0, 00:12:49.175 "data_size": 0 00:12:49.175 }, 00:12:49.175 { 00:12:49.175 "name": "BaseBdev3", 00:12:49.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.175 "is_configured": false, 00:12:49.175 "data_offset": 0, 00:12:49.175 "data_size": 0 00:12:49.175 }, 00:12:49.175 { 00:12:49.175 "name": "BaseBdev4", 00:12:49.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.175 "is_configured": false, 00:12:49.175 "data_offset": 0, 00:12:49.175 "data_size": 0 00:12:49.175 } 00:12:49.175 ] 00:12:49.175 }' 00:12:49.175 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.175 21:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.434 21:39:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:49.434 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.434 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.434 [2024-12-10 21:39:50.178829] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:49.434 [2024-12-10 21:39:50.178875] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:49.434 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.434 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:49.434 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.434 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.434 [2024-12-10 21:39:50.190818] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:49.434 [2024-12-10 21:39:50.190866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:49.434 [2024-12-10 21:39:50.190876] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:49.434 [2024-12-10 21:39:50.190885] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:49.434 [2024-12-10 21:39:50.190891] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:49.434 [2024-12-10 21:39:50.190900] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:49.434 [2024-12-10 21:39:50.190906] bdev.c:8697:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:12:49.434 [2024-12-10 21:39:50.190915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:49.435 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.435 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:49.435 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.435 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.692 [2024-12-10 21:39:50.240934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:49.692 BaseBdev1 00:12:49.693 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.693 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:49.693 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:49.693 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:49.693 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:49.693 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:49.693 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:49.693 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:49.693 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.693 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.693 21:39:50 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.693 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:49.693 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.693 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.693 [ 00:12:49.693 { 00:12:49.693 "name": "BaseBdev1", 00:12:49.693 "aliases": [ 00:12:49.693 "9f9c3dd8-7e9a-46e4-bf36-965219803ed2" 00:12:49.693 ], 00:12:49.693 "product_name": "Malloc disk", 00:12:49.693 "block_size": 512, 00:12:49.693 "num_blocks": 65536, 00:12:49.693 "uuid": "9f9c3dd8-7e9a-46e4-bf36-965219803ed2", 00:12:49.693 "assigned_rate_limits": { 00:12:49.693 "rw_ios_per_sec": 0, 00:12:49.693 "rw_mbytes_per_sec": 0, 00:12:49.693 "r_mbytes_per_sec": 0, 00:12:49.693 "w_mbytes_per_sec": 0 00:12:49.693 }, 00:12:49.693 "claimed": true, 00:12:49.693 "claim_type": "exclusive_write", 00:12:49.693 "zoned": false, 00:12:49.693 "supported_io_types": { 00:12:49.693 "read": true, 00:12:49.693 "write": true, 00:12:49.693 "unmap": true, 00:12:49.693 "flush": true, 00:12:49.693 "reset": true, 00:12:49.693 "nvme_admin": false, 00:12:49.693 "nvme_io": false, 00:12:49.693 "nvme_io_md": false, 00:12:49.693 "write_zeroes": true, 00:12:49.693 "zcopy": true, 00:12:49.693 "get_zone_info": false, 00:12:49.693 "zone_management": false, 00:12:49.693 "zone_append": false, 00:12:49.693 "compare": false, 00:12:49.693 "compare_and_write": false, 00:12:49.693 "abort": true, 00:12:49.693 "seek_hole": false, 00:12:49.693 "seek_data": false, 00:12:49.693 "copy": true, 00:12:49.693 "nvme_iov_md": false 00:12:49.693 }, 00:12:49.693 "memory_domains": [ 00:12:49.693 { 00:12:49.693 "dma_device_id": "system", 00:12:49.693 "dma_device_type": 1 00:12:49.693 }, 00:12:49.693 { 00:12:49.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.693 "dma_device_type": 2 00:12:49.693 } 
00:12:49.693 ], 00:12:49.693 "driver_specific": {} 00:12:49.693 } 00:12:49.693 ] 00:12:49.693 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.693 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:49.693 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:49.693 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.693 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.693 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:49.693 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.693 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:49.693 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.693 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.693 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.693 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.693 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.693 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.693 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.693 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.693 21:39:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.693 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.693 "name": "Existed_Raid", 00:12:49.693 "uuid": "aee2e4c9-f44e-4aca-90d3-07f5e55b63fd", 00:12:49.693 "strip_size_kb": 64, 00:12:49.693 "state": "configuring", 00:12:49.693 "raid_level": "concat", 00:12:49.693 "superblock": true, 00:12:49.693 "num_base_bdevs": 4, 00:12:49.693 "num_base_bdevs_discovered": 1, 00:12:49.693 "num_base_bdevs_operational": 4, 00:12:49.693 "base_bdevs_list": [ 00:12:49.693 { 00:12:49.693 "name": "BaseBdev1", 00:12:49.693 "uuid": "9f9c3dd8-7e9a-46e4-bf36-965219803ed2", 00:12:49.693 "is_configured": true, 00:12:49.693 "data_offset": 2048, 00:12:49.693 "data_size": 63488 00:12:49.693 }, 00:12:49.693 { 00:12:49.693 "name": "BaseBdev2", 00:12:49.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.693 "is_configured": false, 00:12:49.693 "data_offset": 0, 00:12:49.693 "data_size": 0 00:12:49.693 }, 00:12:49.693 { 00:12:49.693 "name": "BaseBdev3", 00:12:49.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.693 "is_configured": false, 00:12:49.693 "data_offset": 0, 00:12:49.693 "data_size": 0 00:12:49.693 }, 00:12:49.693 { 00:12:49.693 "name": "BaseBdev4", 00:12:49.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.693 "is_configured": false, 00:12:49.693 "data_offset": 0, 00:12:49.693 "data_size": 0 00:12:49.693 } 00:12:49.693 ] 00:12:49.693 }' 00:12:49.693 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.693 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.262 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:50.262 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.262 21:39:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.262 [2024-12-10 21:39:50.764151] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:50.262 [2024-12-10 21:39:50.764225] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:50.262 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.262 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:50.262 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.262 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.262 [2024-12-10 21:39:50.776217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:50.262 [2024-12-10 21:39:50.778173] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:50.262 [2024-12-10 21:39:50.778221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:50.262 [2024-12-10 21:39:50.778231] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:50.262 [2024-12-10 21:39:50.778242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:50.262 [2024-12-10 21:39:50.778249] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:50.262 [2024-12-10 21:39:50.778258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:50.262 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.262 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:12:50.262 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:50.262 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:50.262 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.262 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:50.262 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:50.262 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:50.262 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:50.262 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.262 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.262 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.262 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.262 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.262 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.262 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.262 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.262 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.262 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:12:50.262 "name": "Existed_Raid", 00:12:50.262 "uuid": "1b7447e4-0c85-471f-9b22-9468e3f294e9", 00:12:50.262 "strip_size_kb": 64, 00:12:50.262 "state": "configuring", 00:12:50.262 "raid_level": "concat", 00:12:50.262 "superblock": true, 00:12:50.262 "num_base_bdevs": 4, 00:12:50.262 "num_base_bdevs_discovered": 1, 00:12:50.262 "num_base_bdevs_operational": 4, 00:12:50.262 "base_bdevs_list": [ 00:12:50.262 { 00:12:50.262 "name": "BaseBdev1", 00:12:50.262 "uuid": "9f9c3dd8-7e9a-46e4-bf36-965219803ed2", 00:12:50.262 "is_configured": true, 00:12:50.262 "data_offset": 2048, 00:12:50.262 "data_size": 63488 00:12:50.262 }, 00:12:50.262 { 00:12:50.262 "name": "BaseBdev2", 00:12:50.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.262 "is_configured": false, 00:12:50.262 "data_offset": 0, 00:12:50.262 "data_size": 0 00:12:50.262 }, 00:12:50.262 { 00:12:50.262 "name": "BaseBdev3", 00:12:50.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.262 "is_configured": false, 00:12:50.262 "data_offset": 0, 00:12:50.262 "data_size": 0 00:12:50.262 }, 00:12:50.262 { 00:12:50.262 "name": "BaseBdev4", 00:12:50.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.262 "is_configured": false, 00:12:50.262 "data_offset": 0, 00:12:50.262 "data_size": 0 00:12:50.262 } 00:12:50.262 ] 00:12:50.262 }' 00:12:50.262 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.262 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.521 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:50.521 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.521 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.521 [2024-12-10 21:39:51.292039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:12:50.521 BaseBdev2 00:12:50.521 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.521 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:50.521 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:50.521 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:50.521 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:50.521 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:50.521 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:50.521 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:50.521 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.521 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.780 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.780 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:50.780 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.780 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.780 [ 00:12:50.780 { 00:12:50.780 "name": "BaseBdev2", 00:12:50.780 "aliases": [ 00:12:50.780 "02e176c0-74c4-4dcb-a85f-1ed6cbaa9240" 00:12:50.780 ], 00:12:50.780 "product_name": "Malloc disk", 00:12:50.780 "block_size": 512, 00:12:50.780 "num_blocks": 65536, 00:12:50.780 "uuid": "02e176c0-74c4-4dcb-a85f-1ed6cbaa9240", 
00:12:50.780 "assigned_rate_limits": { 00:12:50.780 "rw_ios_per_sec": 0, 00:12:50.780 "rw_mbytes_per_sec": 0, 00:12:50.780 "r_mbytes_per_sec": 0, 00:12:50.780 "w_mbytes_per_sec": 0 00:12:50.780 }, 00:12:50.780 "claimed": true, 00:12:50.780 "claim_type": "exclusive_write", 00:12:50.780 "zoned": false, 00:12:50.780 "supported_io_types": { 00:12:50.780 "read": true, 00:12:50.780 "write": true, 00:12:50.780 "unmap": true, 00:12:50.780 "flush": true, 00:12:50.780 "reset": true, 00:12:50.780 "nvme_admin": false, 00:12:50.780 "nvme_io": false, 00:12:50.780 "nvme_io_md": false, 00:12:50.780 "write_zeroes": true, 00:12:50.780 "zcopy": true, 00:12:50.780 "get_zone_info": false, 00:12:50.780 "zone_management": false, 00:12:50.780 "zone_append": false, 00:12:50.780 "compare": false, 00:12:50.780 "compare_and_write": false, 00:12:50.780 "abort": true, 00:12:50.780 "seek_hole": false, 00:12:50.780 "seek_data": false, 00:12:50.780 "copy": true, 00:12:50.780 "nvme_iov_md": false 00:12:50.780 }, 00:12:50.780 "memory_domains": [ 00:12:50.780 { 00:12:50.780 "dma_device_id": "system", 00:12:50.780 "dma_device_type": 1 00:12:50.780 }, 00:12:50.780 { 00:12:50.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.780 "dma_device_type": 2 00:12:50.780 } 00:12:50.780 ], 00:12:50.780 "driver_specific": {} 00:12:50.780 } 00:12:50.780 ] 00:12:50.780 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.780 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:50.780 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:50.780 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:50.780 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:50.780 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:12:50.780 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:50.780 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:50.780 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:50.780 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:50.781 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.781 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.781 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.781 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.781 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.781 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.781 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.781 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.781 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.781 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.781 "name": "Existed_Raid", 00:12:50.781 "uuid": "1b7447e4-0c85-471f-9b22-9468e3f294e9", 00:12:50.781 "strip_size_kb": 64, 00:12:50.781 "state": "configuring", 00:12:50.781 "raid_level": "concat", 00:12:50.781 "superblock": true, 00:12:50.781 "num_base_bdevs": 4, 00:12:50.781 "num_base_bdevs_discovered": 2, 00:12:50.781 
"num_base_bdevs_operational": 4, 00:12:50.781 "base_bdevs_list": [ 00:12:50.781 { 00:12:50.781 "name": "BaseBdev1", 00:12:50.781 "uuid": "9f9c3dd8-7e9a-46e4-bf36-965219803ed2", 00:12:50.781 "is_configured": true, 00:12:50.781 "data_offset": 2048, 00:12:50.781 "data_size": 63488 00:12:50.781 }, 00:12:50.781 { 00:12:50.781 "name": "BaseBdev2", 00:12:50.781 "uuid": "02e176c0-74c4-4dcb-a85f-1ed6cbaa9240", 00:12:50.781 "is_configured": true, 00:12:50.781 "data_offset": 2048, 00:12:50.781 "data_size": 63488 00:12:50.781 }, 00:12:50.781 { 00:12:50.781 "name": "BaseBdev3", 00:12:50.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.781 "is_configured": false, 00:12:50.781 "data_offset": 0, 00:12:50.781 "data_size": 0 00:12:50.781 }, 00:12:50.781 { 00:12:50.781 "name": "BaseBdev4", 00:12:50.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.781 "is_configured": false, 00:12:50.781 "data_offset": 0, 00:12:50.781 "data_size": 0 00:12:50.781 } 00:12:50.781 ] 00:12:50.781 }' 00:12:50.781 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.781 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.347 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:51.347 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.347 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.347 [2024-12-10 21:39:51.896653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:51.347 BaseBdev3 00:12:51.348 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.348 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:51.348 21:39:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:51.348 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:51.348 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:51.348 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:51.348 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:51.348 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:51.348 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.348 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.348 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.348 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:51.348 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.348 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.348 [ 00:12:51.348 { 00:12:51.348 "name": "BaseBdev3", 00:12:51.348 "aliases": [ 00:12:51.348 "f50700e1-d897-42a7-be1e-81881bc12715" 00:12:51.348 ], 00:12:51.348 "product_name": "Malloc disk", 00:12:51.348 "block_size": 512, 00:12:51.348 "num_blocks": 65536, 00:12:51.348 "uuid": "f50700e1-d897-42a7-be1e-81881bc12715", 00:12:51.348 "assigned_rate_limits": { 00:12:51.348 "rw_ios_per_sec": 0, 00:12:51.348 "rw_mbytes_per_sec": 0, 00:12:51.348 "r_mbytes_per_sec": 0, 00:12:51.348 "w_mbytes_per_sec": 0 00:12:51.348 }, 00:12:51.348 "claimed": true, 00:12:51.348 "claim_type": "exclusive_write", 00:12:51.348 "zoned": false, 00:12:51.348 "supported_io_types": { 
00:12:51.348 "read": true, 00:12:51.348 "write": true, 00:12:51.348 "unmap": true, 00:12:51.348 "flush": true, 00:12:51.348 "reset": true, 00:12:51.348 "nvme_admin": false, 00:12:51.348 "nvme_io": false, 00:12:51.348 "nvme_io_md": false, 00:12:51.348 "write_zeroes": true, 00:12:51.348 "zcopy": true, 00:12:51.348 "get_zone_info": false, 00:12:51.348 "zone_management": false, 00:12:51.348 "zone_append": false, 00:12:51.348 "compare": false, 00:12:51.348 "compare_and_write": false, 00:12:51.348 "abort": true, 00:12:51.348 "seek_hole": false, 00:12:51.348 "seek_data": false, 00:12:51.348 "copy": true, 00:12:51.348 "nvme_iov_md": false 00:12:51.348 }, 00:12:51.348 "memory_domains": [ 00:12:51.348 { 00:12:51.348 "dma_device_id": "system", 00:12:51.348 "dma_device_type": 1 00:12:51.348 }, 00:12:51.348 { 00:12:51.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.348 "dma_device_type": 2 00:12:51.348 } 00:12:51.348 ], 00:12:51.348 "driver_specific": {} 00:12:51.348 } 00:12:51.348 ] 00:12:51.348 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.348 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:51.348 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:51.348 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:51.348 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:51.348 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.348 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:51.348 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:51.348 21:39:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:51.348 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:51.348 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.348 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.348 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.348 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.348 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.348 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.348 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.348 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.348 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.348 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.348 "name": "Existed_Raid", 00:12:51.348 "uuid": "1b7447e4-0c85-471f-9b22-9468e3f294e9", 00:12:51.348 "strip_size_kb": 64, 00:12:51.348 "state": "configuring", 00:12:51.348 "raid_level": "concat", 00:12:51.348 "superblock": true, 00:12:51.348 "num_base_bdevs": 4, 00:12:51.348 "num_base_bdevs_discovered": 3, 00:12:51.348 "num_base_bdevs_operational": 4, 00:12:51.348 "base_bdevs_list": [ 00:12:51.348 { 00:12:51.348 "name": "BaseBdev1", 00:12:51.348 "uuid": "9f9c3dd8-7e9a-46e4-bf36-965219803ed2", 00:12:51.348 "is_configured": true, 00:12:51.348 "data_offset": 2048, 00:12:51.348 "data_size": 63488 00:12:51.348 }, 00:12:51.348 { 00:12:51.348 "name": "BaseBdev2", 00:12:51.348 
"uuid": "02e176c0-74c4-4dcb-a85f-1ed6cbaa9240", 00:12:51.348 "is_configured": true, 00:12:51.348 "data_offset": 2048, 00:12:51.348 "data_size": 63488 00:12:51.348 }, 00:12:51.348 { 00:12:51.348 "name": "BaseBdev3", 00:12:51.348 "uuid": "f50700e1-d897-42a7-be1e-81881bc12715", 00:12:51.348 "is_configured": true, 00:12:51.348 "data_offset": 2048, 00:12:51.348 "data_size": 63488 00:12:51.348 }, 00:12:51.348 { 00:12:51.348 "name": "BaseBdev4", 00:12:51.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.348 "is_configured": false, 00:12:51.348 "data_offset": 0, 00:12:51.348 "data_size": 0 00:12:51.348 } 00:12:51.348 ] 00:12:51.348 }' 00:12:51.348 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.348 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.916 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:51.916 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.916 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.916 [2024-12-10 21:39:52.435352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:51.916 [2024-12-10 21:39:52.435688] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:51.916 [2024-12-10 21:39:52.435707] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:51.916 [2024-12-10 21:39:52.436038] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:51.916 BaseBdev4 00:12:51.916 [2024-12-10 21:39:52.436213] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:51.916 [2024-12-10 21:39:52.436232] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:12:51.916 [2024-12-10 21:39:52.436407] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:51.916 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.916 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:51.916 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:51.916 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:51.916 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:51.916 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:51.916 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:51.916 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:51.916 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.916 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.916 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.916 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:51.916 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.916 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.916 [ 00:12:51.916 { 00:12:51.916 "name": "BaseBdev4", 00:12:51.916 "aliases": [ 00:12:51.916 "97227237-88d1-44d6-8535-aa156909f4a5" 00:12:51.916 ], 00:12:51.916 "product_name": "Malloc disk", 00:12:51.916 "block_size": 512, 00:12:51.916 
"num_blocks": 65536, 00:12:51.916 "uuid": "97227237-88d1-44d6-8535-aa156909f4a5", 00:12:51.916 "assigned_rate_limits": { 00:12:51.916 "rw_ios_per_sec": 0, 00:12:51.916 "rw_mbytes_per_sec": 0, 00:12:51.916 "r_mbytes_per_sec": 0, 00:12:51.916 "w_mbytes_per_sec": 0 00:12:51.916 }, 00:12:51.916 "claimed": true, 00:12:51.916 "claim_type": "exclusive_write", 00:12:51.916 "zoned": false, 00:12:51.916 "supported_io_types": { 00:12:51.916 "read": true, 00:12:51.916 "write": true, 00:12:51.916 "unmap": true, 00:12:51.916 "flush": true, 00:12:51.916 "reset": true, 00:12:51.916 "nvme_admin": false, 00:12:51.916 "nvme_io": false, 00:12:51.916 "nvme_io_md": false, 00:12:51.916 "write_zeroes": true, 00:12:51.916 "zcopy": true, 00:12:51.916 "get_zone_info": false, 00:12:51.916 "zone_management": false, 00:12:51.916 "zone_append": false, 00:12:51.916 "compare": false, 00:12:51.916 "compare_and_write": false, 00:12:51.916 "abort": true, 00:12:51.916 "seek_hole": false, 00:12:51.916 "seek_data": false, 00:12:51.916 "copy": true, 00:12:51.916 "nvme_iov_md": false 00:12:51.916 }, 00:12:51.916 "memory_domains": [ 00:12:51.916 { 00:12:51.916 "dma_device_id": "system", 00:12:51.916 "dma_device_type": 1 00:12:51.916 }, 00:12:51.916 { 00:12:51.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.916 "dma_device_type": 2 00:12:51.916 } 00:12:51.916 ], 00:12:51.916 "driver_specific": {} 00:12:51.916 } 00:12:51.916 ] 00:12:51.916 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.916 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:51.916 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:51.916 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:51.916 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:12:51.916 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.916 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:51.916 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:51.916 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:51.916 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:51.916 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.916 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.916 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.916 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.916 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.917 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.917 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.917 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.917 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.917 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.917 "name": "Existed_Raid", 00:12:51.917 "uuid": "1b7447e4-0c85-471f-9b22-9468e3f294e9", 00:12:51.917 "strip_size_kb": 64, 00:12:51.917 "state": "online", 00:12:51.917 "raid_level": "concat", 00:12:51.917 "superblock": true, 00:12:51.917 "num_base_bdevs": 4, 
00:12:51.917 "num_base_bdevs_discovered": 4, 00:12:51.917 "num_base_bdevs_operational": 4, 00:12:51.917 "base_bdevs_list": [ 00:12:51.917 { 00:12:51.917 "name": "BaseBdev1", 00:12:51.917 "uuid": "9f9c3dd8-7e9a-46e4-bf36-965219803ed2", 00:12:51.917 "is_configured": true, 00:12:51.917 "data_offset": 2048, 00:12:51.917 "data_size": 63488 00:12:51.917 }, 00:12:51.917 { 00:12:51.917 "name": "BaseBdev2", 00:12:51.917 "uuid": "02e176c0-74c4-4dcb-a85f-1ed6cbaa9240", 00:12:51.917 "is_configured": true, 00:12:51.917 "data_offset": 2048, 00:12:51.917 "data_size": 63488 00:12:51.917 }, 00:12:51.917 { 00:12:51.917 "name": "BaseBdev3", 00:12:51.917 "uuid": "f50700e1-d897-42a7-be1e-81881bc12715", 00:12:51.917 "is_configured": true, 00:12:51.917 "data_offset": 2048, 00:12:51.917 "data_size": 63488 00:12:51.917 }, 00:12:51.917 { 00:12:51.917 "name": "BaseBdev4", 00:12:51.917 "uuid": "97227237-88d1-44d6-8535-aa156909f4a5", 00:12:51.917 "is_configured": true, 00:12:51.917 "data_offset": 2048, 00:12:51.917 "data_size": 63488 00:12:51.917 } 00:12:51.917 ] 00:12:51.917 }' 00:12:51.917 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.917 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.198 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:52.198 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:52.198 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:52.198 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:52.198 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:52.199 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:52.199 
21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:52.199 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:52.199 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.199 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.199 [2024-12-10 21:39:52.934983] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:52.199 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.199 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:52.199 "name": "Existed_Raid", 00:12:52.199 "aliases": [ 00:12:52.199 "1b7447e4-0c85-471f-9b22-9468e3f294e9" 00:12:52.199 ], 00:12:52.199 "product_name": "Raid Volume", 00:12:52.199 "block_size": 512, 00:12:52.199 "num_blocks": 253952, 00:12:52.199 "uuid": "1b7447e4-0c85-471f-9b22-9468e3f294e9", 00:12:52.199 "assigned_rate_limits": { 00:12:52.199 "rw_ios_per_sec": 0, 00:12:52.199 "rw_mbytes_per_sec": 0, 00:12:52.199 "r_mbytes_per_sec": 0, 00:12:52.199 "w_mbytes_per_sec": 0 00:12:52.199 }, 00:12:52.199 "claimed": false, 00:12:52.199 "zoned": false, 00:12:52.199 "supported_io_types": { 00:12:52.199 "read": true, 00:12:52.199 "write": true, 00:12:52.199 "unmap": true, 00:12:52.199 "flush": true, 00:12:52.199 "reset": true, 00:12:52.199 "nvme_admin": false, 00:12:52.199 "nvme_io": false, 00:12:52.199 "nvme_io_md": false, 00:12:52.199 "write_zeroes": true, 00:12:52.199 "zcopy": false, 00:12:52.199 "get_zone_info": false, 00:12:52.199 "zone_management": false, 00:12:52.199 "zone_append": false, 00:12:52.199 "compare": false, 00:12:52.199 "compare_and_write": false, 00:12:52.199 "abort": false, 00:12:52.199 "seek_hole": false, 00:12:52.199 "seek_data": false, 00:12:52.199 "copy": false, 00:12:52.199 
"nvme_iov_md": false 00:12:52.199 }, 00:12:52.199 "memory_domains": [ 00:12:52.199 { 00:12:52.199 "dma_device_id": "system", 00:12:52.199 "dma_device_type": 1 00:12:52.199 }, 00:12:52.199 { 00:12:52.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.199 "dma_device_type": 2 00:12:52.199 }, 00:12:52.199 { 00:12:52.199 "dma_device_id": "system", 00:12:52.199 "dma_device_type": 1 00:12:52.199 }, 00:12:52.199 { 00:12:52.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.199 "dma_device_type": 2 00:12:52.199 }, 00:12:52.199 { 00:12:52.199 "dma_device_id": "system", 00:12:52.199 "dma_device_type": 1 00:12:52.199 }, 00:12:52.199 { 00:12:52.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.199 "dma_device_type": 2 00:12:52.199 }, 00:12:52.199 { 00:12:52.199 "dma_device_id": "system", 00:12:52.199 "dma_device_type": 1 00:12:52.199 }, 00:12:52.199 { 00:12:52.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.199 "dma_device_type": 2 00:12:52.199 } 00:12:52.199 ], 00:12:52.199 "driver_specific": { 00:12:52.199 "raid": { 00:12:52.199 "uuid": "1b7447e4-0c85-471f-9b22-9468e3f294e9", 00:12:52.199 "strip_size_kb": 64, 00:12:52.199 "state": "online", 00:12:52.199 "raid_level": "concat", 00:12:52.199 "superblock": true, 00:12:52.199 "num_base_bdevs": 4, 00:12:52.199 "num_base_bdevs_discovered": 4, 00:12:52.199 "num_base_bdevs_operational": 4, 00:12:52.199 "base_bdevs_list": [ 00:12:52.199 { 00:12:52.199 "name": "BaseBdev1", 00:12:52.199 "uuid": "9f9c3dd8-7e9a-46e4-bf36-965219803ed2", 00:12:52.199 "is_configured": true, 00:12:52.199 "data_offset": 2048, 00:12:52.199 "data_size": 63488 00:12:52.199 }, 00:12:52.199 { 00:12:52.199 "name": "BaseBdev2", 00:12:52.199 "uuid": "02e176c0-74c4-4dcb-a85f-1ed6cbaa9240", 00:12:52.199 "is_configured": true, 00:12:52.199 "data_offset": 2048, 00:12:52.199 "data_size": 63488 00:12:52.199 }, 00:12:52.199 { 00:12:52.199 "name": "BaseBdev3", 00:12:52.199 "uuid": "f50700e1-d897-42a7-be1e-81881bc12715", 00:12:52.199 "is_configured": true, 
00:12:52.199 "data_offset": 2048, 00:12:52.199 "data_size": 63488 00:12:52.199 }, 00:12:52.199 { 00:12:52.199 "name": "BaseBdev4", 00:12:52.199 "uuid": "97227237-88d1-44d6-8535-aa156909f4a5", 00:12:52.199 "is_configured": true, 00:12:52.199 "data_offset": 2048, 00:12:52.199 "data_size": 63488 00:12:52.199 } 00:12:52.199 ] 00:12:52.199 } 00:12:52.199 } 00:12:52.199 }' 00:12:52.199 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:52.458 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:52.458 BaseBdev2 00:12:52.458 BaseBdev3 00:12:52.458 BaseBdev4' 00:12:52.458 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:52.458 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:52.458 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:52.458 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:52.458 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.458 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.458 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:52.458 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.458 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:52.458 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:52.458 21:39:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:52.458 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:52.458 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.458 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.458 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:52.458 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.458 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:52.458 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:52.458 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:52.458 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:52.458 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:52.458 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.458 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.458 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.458 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:52.458 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:52.458 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:52.459 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:52.459 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:52.459 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.459 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.459 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.459 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:52.459 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:52.459 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:52.459 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.459 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.459 [2024-12-10 21:39:53.238190] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:52.459 [2024-12-10 21:39:53.238232] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:52.459 [2024-12-10 21:39:53.238301] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:52.719 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.719 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:52.719 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:52.719 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:12:52.719 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:52.719 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:52.719 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:12:52.719 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.719 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:52.719 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:52.719 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:52.719 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:52.719 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.719 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.719 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.719 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.719 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.719 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.719 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.719 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.719 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:52.719 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.719 "name": "Existed_Raid", 00:12:52.719 "uuid": "1b7447e4-0c85-471f-9b22-9468e3f294e9", 00:12:52.719 "strip_size_kb": 64, 00:12:52.719 "state": "offline", 00:12:52.719 "raid_level": "concat", 00:12:52.719 "superblock": true, 00:12:52.719 "num_base_bdevs": 4, 00:12:52.719 "num_base_bdevs_discovered": 3, 00:12:52.719 "num_base_bdevs_operational": 3, 00:12:52.719 "base_bdevs_list": [ 00:12:52.719 { 00:12:52.719 "name": null, 00:12:52.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.719 "is_configured": false, 00:12:52.719 "data_offset": 0, 00:12:52.719 "data_size": 63488 00:12:52.719 }, 00:12:52.719 { 00:12:52.719 "name": "BaseBdev2", 00:12:52.719 "uuid": "02e176c0-74c4-4dcb-a85f-1ed6cbaa9240", 00:12:52.719 "is_configured": true, 00:12:52.719 "data_offset": 2048, 00:12:52.719 "data_size": 63488 00:12:52.719 }, 00:12:52.719 { 00:12:52.719 "name": "BaseBdev3", 00:12:52.719 "uuid": "f50700e1-d897-42a7-be1e-81881bc12715", 00:12:52.719 "is_configured": true, 00:12:52.719 "data_offset": 2048, 00:12:52.719 "data_size": 63488 00:12:52.719 }, 00:12:52.719 { 00:12:52.719 "name": "BaseBdev4", 00:12:52.719 "uuid": "97227237-88d1-44d6-8535-aa156909f4a5", 00:12:52.719 "is_configured": true, 00:12:52.719 "data_offset": 2048, 00:12:52.719 "data_size": 63488 00:12:52.719 } 00:12:52.719 ] 00:12:52.719 }' 00:12:52.719 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.719 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.287 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:53.287 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:53.287 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.287 
21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:53.287 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.287 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.287 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.287 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:53.287 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:53.287 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:53.287 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.287 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.287 [2024-12-10 21:39:53.883447] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:53.287 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.287 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:53.287 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:53.287 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.287 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.287 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:53.287 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.287 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:53.287 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:53.287 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:53.287 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:53.287 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.287 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.287 [2024-12-10 21:39:54.057471] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:53.547 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.547 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:53.547 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:53.547 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.547 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.547 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:53.547 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.547 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.547 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:53.547 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:53.547 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:53.547 21:39:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.547 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.547 [2024-12-10 21:39:54.227472] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:53.547 [2024-12-10 21:39:54.227532] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:53.807 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.807 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:53.807 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:53.807 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.807 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:53.807 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.807 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.807 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.807 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:53.807 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:53.807 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:53.807 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:53.807 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:53.807 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:12:53.807 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.807 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.807 BaseBdev2 00:12:53.807 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.807 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:53.807 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:53.807 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:53.807 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:53.807 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:53.807 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:53.807 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:53.807 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.808 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.808 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.808 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:53.808 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.808 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.808 [ 00:12:53.808 { 00:12:53.808 "name": "BaseBdev2", 00:12:53.808 "aliases": [ 00:12:53.808 
"41cc5cd4-c6ea-439a-901c-11de8796441b" 00:12:53.808 ], 00:12:53.808 "product_name": "Malloc disk", 00:12:53.808 "block_size": 512, 00:12:53.808 "num_blocks": 65536, 00:12:53.808 "uuid": "41cc5cd4-c6ea-439a-901c-11de8796441b", 00:12:53.808 "assigned_rate_limits": { 00:12:53.808 "rw_ios_per_sec": 0, 00:12:53.808 "rw_mbytes_per_sec": 0, 00:12:53.808 "r_mbytes_per_sec": 0, 00:12:53.808 "w_mbytes_per_sec": 0 00:12:53.808 }, 00:12:53.808 "claimed": false, 00:12:53.808 "zoned": false, 00:12:53.808 "supported_io_types": { 00:12:53.808 "read": true, 00:12:53.808 "write": true, 00:12:53.808 "unmap": true, 00:12:53.808 "flush": true, 00:12:53.808 "reset": true, 00:12:53.808 "nvme_admin": false, 00:12:53.808 "nvme_io": false, 00:12:53.808 "nvme_io_md": false, 00:12:53.808 "write_zeroes": true, 00:12:53.808 "zcopy": true, 00:12:53.808 "get_zone_info": false, 00:12:53.808 "zone_management": false, 00:12:53.808 "zone_append": false, 00:12:53.808 "compare": false, 00:12:53.808 "compare_and_write": false, 00:12:53.808 "abort": true, 00:12:53.808 "seek_hole": false, 00:12:53.808 "seek_data": false, 00:12:53.808 "copy": true, 00:12:53.808 "nvme_iov_md": false 00:12:53.808 }, 00:12:53.808 "memory_domains": [ 00:12:53.808 { 00:12:53.808 "dma_device_id": "system", 00:12:53.808 "dma_device_type": 1 00:12:53.808 }, 00:12:53.808 { 00:12:53.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.808 "dma_device_type": 2 00:12:53.808 } 00:12:53.808 ], 00:12:53.808 "driver_specific": {} 00:12:53.808 } 00:12:53.808 ] 00:12:53.808 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.808 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:53.808 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:53.808 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:53.808 21:39:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:53.808 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.808 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.808 BaseBdev3 00:12:53.808 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.808 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:53.808 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:53.808 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:53.808 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:53.808 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:53.808 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:53.808 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:53.808 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.808 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.808 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.808 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:53.808 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.808 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.808 [ 00:12:53.808 { 
00:12:53.808 "name": "BaseBdev3", 00:12:53.808 "aliases": [ 00:12:53.808 "3a59633b-3ea4-4c81-ab15-7ad2d6700945" 00:12:53.808 ], 00:12:53.808 "product_name": "Malloc disk", 00:12:53.808 "block_size": 512, 00:12:53.808 "num_blocks": 65536, 00:12:53.808 "uuid": "3a59633b-3ea4-4c81-ab15-7ad2d6700945", 00:12:53.808 "assigned_rate_limits": { 00:12:53.808 "rw_ios_per_sec": 0, 00:12:53.808 "rw_mbytes_per_sec": 0, 00:12:53.808 "r_mbytes_per_sec": 0, 00:12:53.808 "w_mbytes_per_sec": 0 00:12:53.808 }, 00:12:53.808 "claimed": false, 00:12:53.808 "zoned": false, 00:12:53.808 "supported_io_types": { 00:12:53.808 "read": true, 00:12:53.808 "write": true, 00:12:53.808 "unmap": true, 00:12:53.808 "flush": true, 00:12:53.808 "reset": true, 00:12:53.808 "nvme_admin": false, 00:12:53.808 "nvme_io": false, 00:12:53.808 "nvme_io_md": false, 00:12:53.808 "write_zeroes": true, 00:12:53.808 "zcopy": true, 00:12:53.808 "get_zone_info": false, 00:12:53.808 "zone_management": false, 00:12:53.808 "zone_append": false, 00:12:53.808 "compare": false, 00:12:53.808 "compare_and_write": false, 00:12:53.808 "abort": true, 00:12:53.808 "seek_hole": false, 00:12:53.808 "seek_data": false, 00:12:53.808 "copy": true, 00:12:53.808 "nvme_iov_md": false 00:12:53.808 }, 00:12:53.808 "memory_domains": [ 00:12:53.808 { 00:12:53.808 "dma_device_id": "system", 00:12:53.808 "dma_device_type": 1 00:12:53.808 }, 00:12:53.808 { 00:12:53.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.808 "dma_device_type": 2 00:12:53.808 } 00:12:53.808 ], 00:12:53.808 "driver_specific": {} 00:12:53.808 } 00:12:53.808 ] 00:12:53.808 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.808 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:53.808 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:53.808 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:12:53.808 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:53.808 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.808 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.068 BaseBdev4 00:12:54.068 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.068 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:54.068 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:54.068 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:54.068 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:54.068 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:54.068 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:54.068 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:54.068 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.068 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.068 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.068 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:54.068 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.068 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:12:54.068 [ 00:12:54.068 { 00:12:54.068 "name": "BaseBdev4", 00:12:54.068 "aliases": [ 00:12:54.068 "151ba01f-1111-49e1-bcc5-aee37831a487" 00:12:54.068 ], 00:12:54.068 "product_name": "Malloc disk", 00:12:54.068 "block_size": 512, 00:12:54.068 "num_blocks": 65536, 00:12:54.068 "uuid": "151ba01f-1111-49e1-bcc5-aee37831a487", 00:12:54.068 "assigned_rate_limits": { 00:12:54.068 "rw_ios_per_sec": 0, 00:12:54.068 "rw_mbytes_per_sec": 0, 00:12:54.068 "r_mbytes_per_sec": 0, 00:12:54.068 "w_mbytes_per_sec": 0 00:12:54.068 }, 00:12:54.068 "claimed": false, 00:12:54.068 "zoned": false, 00:12:54.068 "supported_io_types": { 00:12:54.068 "read": true, 00:12:54.068 "write": true, 00:12:54.068 "unmap": true, 00:12:54.068 "flush": true, 00:12:54.068 "reset": true, 00:12:54.068 "nvme_admin": false, 00:12:54.068 "nvme_io": false, 00:12:54.068 "nvme_io_md": false, 00:12:54.068 "write_zeroes": true, 00:12:54.068 "zcopy": true, 00:12:54.068 "get_zone_info": false, 00:12:54.068 "zone_management": false, 00:12:54.068 "zone_append": false, 00:12:54.068 "compare": false, 00:12:54.068 "compare_and_write": false, 00:12:54.068 "abort": true, 00:12:54.068 "seek_hole": false, 00:12:54.068 "seek_data": false, 00:12:54.068 "copy": true, 00:12:54.068 "nvme_iov_md": false 00:12:54.068 }, 00:12:54.068 "memory_domains": [ 00:12:54.068 { 00:12:54.068 "dma_device_id": "system", 00:12:54.068 "dma_device_type": 1 00:12:54.068 }, 00:12:54.069 { 00:12:54.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.069 "dma_device_type": 2 00:12:54.069 } 00:12:54.069 ], 00:12:54.069 "driver_specific": {} 00:12:54.069 } 00:12:54.069 ] 00:12:54.069 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.069 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:54.069 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:54.069 21:39:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:54.069 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:54.069 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.069 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.069 [2024-12-10 21:39:54.663446] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:54.069 [2024-12-10 21:39:54.663503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:54.069 [2024-12-10 21:39:54.663534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:54.069 [2024-12-10 21:39:54.665682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:54.069 [2024-12-10 21:39:54.665738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:54.069 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.069 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:54.069 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:54.069 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:54.069 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:54.069 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:54.069 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:54.069 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.069 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.069 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.069 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.069 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.069 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.069 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.069 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.069 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.069 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.069 "name": "Existed_Raid", 00:12:54.069 "uuid": "90728c64-9d60-42f3-8985-09f1d7bcd741", 00:12:54.069 "strip_size_kb": 64, 00:12:54.069 "state": "configuring", 00:12:54.069 "raid_level": "concat", 00:12:54.069 "superblock": true, 00:12:54.069 "num_base_bdevs": 4, 00:12:54.069 "num_base_bdevs_discovered": 3, 00:12:54.069 "num_base_bdevs_operational": 4, 00:12:54.069 "base_bdevs_list": [ 00:12:54.069 { 00:12:54.069 "name": "BaseBdev1", 00:12:54.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.069 "is_configured": false, 00:12:54.069 "data_offset": 0, 00:12:54.069 "data_size": 0 00:12:54.069 }, 00:12:54.069 { 00:12:54.069 "name": "BaseBdev2", 00:12:54.069 "uuid": "41cc5cd4-c6ea-439a-901c-11de8796441b", 00:12:54.069 "is_configured": true, 00:12:54.069 "data_offset": 2048, 00:12:54.069 "data_size": 63488 
00:12:54.069 }, 00:12:54.069 { 00:12:54.069 "name": "BaseBdev3", 00:12:54.069 "uuid": "3a59633b-3ea4-4c81-ab15-7ad2d6700945", 00:12:54.069 "is_configured": true, 00:12:54.069 "data_offset": 2048, 00:12:54.069 "data_size": 63488 00:12:54.069 }, 00:12:54.069 { 00:12:54.069 "name": "BaseBdev4", 00:12:54.069 "uuid": "151ba01f-1111-49e1-bcc5-aee37831a487", 00:12:54.069 "is_configured": true, 00:12:54.069 "data_offset": 2048, 00:12:54.069 "data_size": 63488 00:12:54.069 } 00:12:54.069 ] 00:12:54.069 }' 00:12:54.069 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.069 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.638 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:54.638 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.638 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.638 [2024-12-10 21:39:55.182547] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:54.638 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.638 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:54.638 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:54.638 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:54.638 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:54.638 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:54.638 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:54.638 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.638 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.638 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.638 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.638 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.638 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.638 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.638 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.638 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.638 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.638 "name": "Existed_Raid", 00:12:54.638 "uuid": "90728c64-9d60-42f3-8985-09f1d7bcd741", 00:12:54.638 "strip_size_kb": 64, 00:12:54.638 "state": "configuring", 00:12:54.638 "raid_level": "concat", 00:12:54.638 "superblock": true, 00:12:54.638 "num_base_bdevs": 4, 00:12:54.638 "num_base_bdevs_discovered": 2, 00:12:54.639 "num_base_bdevs_operational": 4, 00:12:54.639 "base_bdevs_list": [ 00:12:54.639 { 00:12:54.639 "name": "BaseBdev1", 00:12:54.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.639 "is_configured": false, 00:12:54.639 "data_offset": 0, 00:12:54.639 "data_size": 0 00:12:54.639 }, 00:12:54.639 { 00:12:54.639 "name": null, 00:12:54.639 "uuid": "41cc5cd4-c6ea-439a-901c-11de8796441b", 00:12:54.639 "is_configured": false, 00:12:54.639 "data_offset": 0, 00:12:54.639 "data_size": 63488 
00:12:54.639 }, 00:12:54.639 { 00:12:54.639 "name": "BaseBdev3", 00:12:54.639 "uuid": "3a59633b-3ea4-4c81-ab15-7ad2d6700945", 00:12:54.639 "is_configured": true, 00:12:54.639 "data_offset": 2048, 00:12:54.639 "data_size": 63488 00:12:54.639 }, 00:12:54.639 { 00:12:54.639 "name": "BaseBdev4", 00:12:54.639 "uuid": "151ba01f-1111-49e1-bcc5-aee37831a487", 00:12:54.639 "is_configured": true, 00:12:54.639 "data_offset": 2048, 00:12:54.639 "data_size": 63488 00:12:54.639 } 00:12:54.639 ] 00:12:54.639 }' 00:12:54.639 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.639 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.208 [2024-12-10 21:39:55.788459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:55.208 BaseBdev1 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.208 [ 00:12:55.208 { 00:12:55.208 "name": "BaseBdev1", 00:12:55.208 "aliases": [ 00:12:55.208 "129f2229-2e49-4281-986d-d2d6f535a74e" 00:12:55.208 ], 00:12:55.208 "product_name": "Malloc disk", 00:12:55.208 "block_size": 512, 00:12:55.208 "num_blocks": 65536, 00:12:55.208 "uuid": "129f2229-2e49-4281-986d-d2d6f535a74e", 00:12:55.208 "assigned_rate_limits": { 00:12:55.208 "rw_ios_per_sec": 0, 00:12:55.208 "rw_mbytes_per_sec": 0, 
00:12:55.208 "r_mbytes_per_sec": 0, 00:12:55.208 "w_mbytes_per_sec": 0 00:12:55.208 }, 00:12:55.208 "claimed": true, 00:12:55.208 "claim_type": "exclusive_write", 00:12:55.208 "zoned": false, 00:12:55.208 "supported_io_types": { 00:12:55.208 "read": true, 00:12:55.208 "write": true, 00:12:55.208 "unmap": true, 00:12:55.208 "flush": true, 00:12:55.208 "reset": true, 00:12:55.208 "nvme_admin": false, 00:12:55.208 "nvme_io": false, 00:12:55.208 "nvme_io_md": false, 00:12:55.208 "write_zeroes": true, 00:12:55.208 "zcopy": true, 00:12:55.208 "get_zone_info": false, 00:12:55.208 "zone_management": false, 00:12:55.208 "zone_append": false, 00:12:55.208 "compare": false, 00:12:55.208 "compare_and_write": false, 00:12:55.208 "abort": true, 00:12:55.208 "seek_hole": false, 00:12:55.208 "seek_data": false, 00:12:55.208 "copy": true, 00:12:55.208 "nvme_iov_md": false 00:12:55.208 }, 00:12:55.208 "memory_domains": [ 00:12:55.208 { 00:12:55.208 "dma_device_id": "system", 00:12:55.208 "dma_device_type": 1 00:12:55.208 }, 00:12:55.208 { 00:12:55.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.208 "dma_device_type": 2 00:12:55.208 } 00:12:55.208 ], 00:12:55.208 "driver_specific": {} 00:12:55.208 } 00:12:55.208 ] 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:55.208 21:39:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.208 "name": "Existed_Raid", 00:12:55.208 "uuid": "90728c64-9d60-42f3-8985-09f1d7bcd741", 00:12:55.208 "strip_size_kb": 64, 00:12:55.208 "state": "configuring", 00:12:55.208 "raid_level": "concat", 00:12:55.208 "superblock": true, 00:12:55.208 "num_base_bdevs": 4, 00:12:55.208 "num_base_bdevs_discovered": 3, 00:12:55.208 "num_base_bdevs_operational": 4, 00:12:55.208 "base_bdevs_list": [ 00:12:55.208 { 00:12:55.208 "name": "BaseBdev1", 00:12:55.208 "uuid": "129f2229-2e49-4281-986d-d2d6f535a74e", 00:12:55.208 "is_configured": true, 00:12:55.208 "data_offset": 2048, 00:12:55.208 "data_size": 63488 00:12:55.208 }, 00:12:55.208 { 
00:12:55.208 "name": null, 00:12:55.208 "uuid": "41cc5cd4-c6ea-439a-901c-11de8796441b", 00:12:55.208 "is_configured": false, 00:12:55.208 "data_offset": 0, 00:12:55.208 "data_size": 63488 00:12:55.208 }, 00:12:55.208 { 00:12:55.208 "name": "BaseBdev3", 00:12:55.208 "uuid": "3a59633b-3ea4-4c81-ab15-7ad2d6700945", 00:12:55.208 "is_configured": true, 00:12:55.208 "data_offset": 2048, 00:12:55.208 "data_size": 63488 00:12:55.208 }, 00:12:55.208 { 00:12:55.208 "name": "BaseBdev4", 00:12:55.208 "uuid": "151ba01f-1111-49e1-bcc5-aee37831a487", 00:12:55.208 "is_configured": true, 00:12:55.208 "data_offset": 2048, 00:12:55.208 "data_size": 63488 00:12:55.208 } 00:12:55.208 ] 00:12:55.208 }' 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.208 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.777 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.777 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:55.777 21:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.777 21:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.777 21:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.777 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:55.777 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:55.777 21:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.777 21:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.777 [2024-12-10 21:39:56.355742] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:55.777 21:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.777 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:55.777 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:55.777 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:55.777 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:55.777 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.777 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:55.777 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.777 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.777 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.777 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.777 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.777 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.777 21:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.777 21:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.777 21:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.777 21:39:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.777 "name": "Existed_Raid", 00:12:55.777 "uuid": "90728c64-9d60-42f3-8985-09f1d7bcd741", 00:12:55.777 "strip_size_kb": 64, 00:12:55.777 "state": "configuring", 00:12:55.777 "raid_level": "concat", 00:12:55.777 "superblock": true, 00:12:55.777 "num_base_bdevs": 4, 00:12:55.778 "num_base_bdevs_discovered": 2, 00:12:55.778 "num_base_bdevs_operational": 4, 00:12:55.778 "base_bdevs_list": [ 00:12:55.778 { 00:12:55.778 "name": "BaseBdev1", 00:12:55.778 "uuid": "129f2229-2e49-4281-986d-d2d6f535a74e", 00:12:55.778 "is_configured": true, 00:12:55.778 "data_offset": 2048, 00:12:55.778 "data_size": 63488 00:12:55.778 }, 00:12:55.778 { 00:12:55.778 "name": null, 00:12:55.778 "uuid": "41cc5cd4-c6ea-439a-901c-11de8796441b", 00:12:55.778 "is_configured": false, 00:12:55.778 "data_offset": 0, 00:12:55.778 "data_size": 63488 00:12:55.778 }, 00:12:55.778 { 00:12:55.778 "name": null, 00:12:55.778 "uuid": "3a59633b-3ea4-4c81-ab15-7ad2d6700945", 00:12:55.778 "is_configured": false, 00:12:55.778 "data_offset": 0, 00:12:55.778 "data_size": 63488 00:12:55.778 }, 00:12:55.778 { 00:12:55.778 "name": "BaseBdev4", 00:12:55.778 "uuid": "151ba01f-1111-49e1-bcc5-aee37831a487", 00:12:55.778 "is_configured": true, 00:12:55.778 "data_offset": 2048, 00:12:55.778 "data_size": 63488 00:12:55.778 } 00:12:55.778 ] 00:12:55.778 }' 00:12:55.778 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.778 21:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.348 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:56.348 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.348 21:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.348 
21:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.348 21:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.348 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:56.348 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:56.348 21:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.348 21:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.348 [2024-12-10 21:39:56.890833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:56.348 21:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.348 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:56.348 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:56.348 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.348 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:56.348 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:56.348 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:56.348 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.348 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.348 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:56.348 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.348 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.348 21:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.348 21:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.348 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.348 21:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.348 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.348 "name": "Existed_Raid", 00:12:56.348 "uuid": "90728c64-9d60-42f3-8985-09f1d7bcd741", 00:12:56.348 "strip_size_kb": 64, 00:12:56.348 "state": "configuring", 00:12:56.348 "raid_level": "concat", 00:12:56.348 "superblock": true, 00:12:56.348 "num_base_bdevs": 4, 00:12:56.348 "num_base_bdevs_discovered": 3, 00:12:56.348 "num_base_bdevs_operational": 4, 00:12:56.348 "base_bdevs_list": [ 00:12:56.348 { 00:12:56.348 "name": "BaseBdev1", 00:12:56.348 "uuid": "129f2229-2e49-4281-986d-d2d6f535a74e", 00:12:56.348 "is_configured": true, 00:12:56.348 "data_offset": 2048, 00:12:56.348 "data_size": 63488 00:12:56.348 }, 00:12:56.348 { 00:12:56.348 "name": null, 00:12:56.348 "uuid": "41cc5cd4-c6ea-439a-901c-11de8796441b", 00:12:56.348 "is_configured": false, 00:12:56.348 "data_offset": 0, 00:12:56.348 "data_size": 63488 00:12:56.348 }, 00:12:56.348 { 00:12:56.348 "name": "BaseBdev3", 00:12:56.348 "uuid": "3a59633b-3ea4-4c81-ab15-7ad2d6700945", 00:12:56.348 "is_configured": true, 00:12:56.348 "data_offset": 2048, 00:12:56.348 "data_size": 63488 00:12:56.348 }, 00:12:56.348 { 00:12:56.348 "name": "BaseBdev4", 00:12:56.348 "uuid": 
"151ba01f-1111-49e1-bcc5-aee37831a487", 00:12:56.348 "is_configured": true, 00:12:56.348 "data_offset": 2048, 00:12:56.348 "data_size": 63488 00:12:56.348 } 00:12:56.348 ] 00:12:56.348 }' 00:12:56.348 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.348 21:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.608 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.608 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.608 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:56.608 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.608 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.868 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:56.868 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:56.868 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.868 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.868 [2024-12-10 21:39:57.406101] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:56.868 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.868 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:56.868 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:56.868 21:39:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.868 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:56.868 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:56.868 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:56.868 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.868 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.868 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.868 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.868 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.868 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.868 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.868 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.868 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.868 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.868 "name": "Existed_Raid", 00:12:56.868 "uuid": "90728c64-9d60-42f3-8985-09f1d7bcd741", 00:12:56.868 "strip_size_kb": 64, 00:12:56.868 "state": "configuring", 00:12:56.868 "raid_level": "concat", 00:12:56.868 "superblock": true, 00:12:56.868 "num_base_bdevs": 4, 00:12:56.868 "num_base_bdevs_discovered": 2, 00:12:56.868 "num_base_bdevs_operational": 4, 00:12:56.868 "base_bdevs_list": [ 00:12:56.868 { 00:12:56.868 "name": null, 00:12:56.868 
"uuid": "129f2229-2e49-4281-986d-d2d6f535a74e", 00:12:56.868 "is_configured": false, 00:12:56.868 "data_offset": 0, 00:12:56.868 "data_size": 63488 00:12:56.868 }, 00:12:56.868 { 00:12:56.868 "name": null, 00:12:56.868 "uuid": "41cc5cd4-c6ea-439a-901c-11de8796441b", 00:12:56.868 "is_configured": false, 00:12:56.868 "data_offset": 0, 00:12:56.868 "data_size": 63488 00:12:56.868 }, 00:12:56.868 { 00:12:56.868 "name": "BaseBdev3", 00:12:56.868 "uuid": "3a59633b-3ea4-4c81-ab15-7ad2d6700945", 00:12:56.868 "is_configured": true, 00:12:56.868 "data_offset": 2048, 00:12:56.868 "data_size": 63488 00:12:56.868 }, 00:12:56.868 { 00:12:56.868 "name": "BaseBdev4", 00:12:56.868 "uuid": "151ba01f-1111-49e1-bcc5-aee37831a487", 00:12:56.868 "is_configured": true, 00:12:56.868 "data_offset": 2048, 00:12:56.868 "data_size": 63488 00:12:56.868 } 00:12:56.868 ] 00:12:56.868 }' 00:12:56.868 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.868 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.471 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:57.471 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.471 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.471 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.471 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.471 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:57.471 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:57.471 21:39:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.471 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.471 [2024-12-10 21:39:58.046578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:57.471 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.471 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:57.471 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:57.471 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:57.471 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:57.471 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:57.471 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:57.471 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.472 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.472 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.472 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.472 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.472 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.472 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.472 21:39:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.472 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.472 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.472 "name": "Existed_Raid", 00:12:57.472 "uuid": "90728c64-9d60-42f3-8985-09f1d7bcd741", 00:12:57.472 "strip_size_kb": 64, 00:12:57.472 "state": "configuring", 00:12:57.472 "raid_level": "concat", 00:12:57.472 "superblock": true, 00:12:57.472 "num_base_bdevs": 4, 00:12:57.472 "num_base_bdevs_discovered": 3, 00:12:57.472 "num_base_bdevs_operational": 4, 00:12:57.472 "base_bdevs_list": [ 00:12:57.472 { 00:12:57.472 "name": null, 00:12:57.472 "uuid": "129f2229-2e49-4281-986d-d2d6f535a74e", 00:12:57.472 "is_configured": false, 00:12:57.472 "data_offset": 0, 00:12:57.472 "data_size": 63488 00:12:57.472 }, 00:12:57.472 { 00:12:57.472 "name": "BaseBdev2", 00:12:57.472 "uuid": "41cc5cd4-c6ea-439a-901c-11de8796441b", 00:12:57.472 "is_configured": true, 00:12:57.472 "data_offset": 2048, 00:12:57.472 "data_size": 63488 00:12:57.472 }, 00:12:57.472 { 00:12:57.472 "name": "BaseBdev3", 00:12:57.472 "uuid": "3a59633b-3ea4-4c81-ab15-7ad2d6700945", 00:12:57.472 "is_configured": true, 00:12:57.472 "data_offset": 2048, 00:12:57.472 "data_size": 63488 00:12:57.472 }, 00:12:57.472 { 00:12:57.472 "name": "BaseBdev4", 00:12:57.472 "uuid": "151ba01f-1111-49e1-bcc5-aee37831a487", 00:12:57.472 "is_configured": true, 00:12:57.472 "data_offset": 2048, 00:12:57.472 "data_size": 63488 00:12:57.472 } 00:12:57.472 ] 00:12:57.472 }' 00:12:57.472 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.472 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.040 21:39:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 129f2229-2e49-4281-986d-d2d6f535a74e 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.040 [2024-12-10 21:39:58.682069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:58.040 [2024-12-10 21:39:58.682378] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:58.040 [2024-12-10 21:39:58.682397] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:58.040 [2024-12-10 21:39:58.682714] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:58.040 [2024-12-10 21:39:58.682885] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:58.040 [2024-12-10 21:39:58.682906] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:58.040 NewBaseBdev 00:12:58.040 [2024-12-10 21:39:58.683076] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.040 21:39:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.040 [ 00:12:58.040 { 00:12:58.040 "name": "NewBaseBdev", 00:12:58.040 "aliases": [ 00:12:58.040 "129f2229-2e49-4281-986d-d2d6f535a74e" 00:12:58.040 ], 00:12:58.040 "product_name": "Malloc disk", 00:12:58.040 "block_size": 512, 00:12:58.040 "num_blocks": 65536, 00:12:58.040 "uuid": "129f2229-2e49-4281-986d-d2d6f535a74e", 00:12:58.040 "assigned_rate_limits": { 00:12:58.040 "rw_ios_per_sec": 0, 00:12:58.040 "rw_mbytes_per_sec": 0, 00:12:58.040 "r_mbytes_per_sec": 0, 00:12:58.040 "w_mbytes_per_sec": 0 00:12:58.040 }, 00:12:58.040 "claimed": true, 00:12:58.040 "claim_type": "exclusive_write", 00:12:58.040 "zoned": false, 00:12:58.040 "supported_io_types": { 00:12:58.040 "read": true, 00:12:58.040 "write": true, 00:12:58.040 "unmap": true, 00:12:58.040 "flush": true, 00:12:58.040 "reset": true, 00:12:58.040 "nvme_admin": false, 00:12:58.040 "nvme_io": false, 00:12:58.040 "nvme_io_md": false, 00:12:58.040 "write_zeroes": true, 00:12:58.040 "zcopy": true, 00:12:58.040 "get_zone_info": false, 00:12:58.040 "zone_management": false, 00:12:58.040 "zone_append": false, 00:12:58.040 "compare": false, 00:12:58.040 "compare_and_write": false, 00:12:58.040 "abort": true, 00:12:58.040 "seek_hole": false, 00:12:58.040 "seek_data": false, 00:12:58.040 "copy": true, 00:12:58.040 "nvme_iov_md": false 00:12:58.040 }, 00:12:58.040 "memory_domains": [ 00:12:58.040 { 00:12:58.040 "dma_device_id": "system", 00:12:58.040 "dma_device_type": 1 00:12:58.040 }, 00:12:58.040 { 00:12:58.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.040 "dma_device_type": 2 00:12:58.040 } 00:12:58.040 ], 00:12:58.040 "driver_specific": {} 00:12:58.040 } 00:12:58.040 ] 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:58.040 21:39:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.040 "name": "Existed_Raid", 00:12:58.040 "uuid": "90728c64-9d60-42f3-8985-09f1d7bcd741", 00:12:58.040 "strip_size_kb": 64, 00:12:58.040 
"state": "online", 00:12:58.040 "raid_level": "concat", 00:12:58.040 "superblock": true, 00:12:58.040 "num_base_bdevs": 4, 00:12:58.040 "num_base_bdevs_discovered": 4, 00:12:58.040 "num_base_bdevs_operational": 4, 00:12:58.040 "base_bdevs_list": [ 00:12:58.040 { 00:12:58.040 "name": "NewBaseBdev", 00:12:58.040 "uuid": "129f2229-2e49-4281-986d-d2d6f535a74e", 00:12:58.040 "is_configured": true, 00:12:58.040 "data_offset": 2048, 00:12:58.040 "data_size": 63488 00:12:58.040 }, 00:12:58.040 { 00:12:58.040 "name": "BaseBdev2", 00:12:58.040 "uuid": "41cc5cd4-c6ea-439a-901c-11de8796441b", 00:12:58.040 "is_configured": true, 00:12:58.040 "data_offset": 2048, 00:12:58.040 "data_size": 63488 00:12:58.040 }, 00:12:58.040 { 00:12:58.040 "name": "BaseBdev3", 00:12:58.040 "uuid": "3a59633b-3ea4-4c81-ab15-7ad2d6700945", 00:12:58.040 "is_configured": true, 00:12:58.040 "data_offset": 2048, 00:12:58.040 "data_size": 63488 00:12:58.040 }, 00:12:58.040 { 00:12:58.040 "name": "BaseBdev4", 00:12:58.040 "uuid": "151ba01f-1111-49e1-bcc5-aee37831a487", 00:12:58.040 "is_configured": true, 00:12:58.040 "data_offset": 2048, 00:12:58.040 "data_size": 63488 00:12:58.040 } 00:12:58.040 ] 00:12:58.040 }' 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.040 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.608 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:58.608 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:58.608 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:58.608 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:58.608 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:58.608 
21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:58.608 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:58.609 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:58.609 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.609 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.609 [2024-12-10 21:39:59.221696] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:58.609 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.609 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:58.609 "name": "Existed_Raid", 00:12:58.609 "aliases": [ 00:12:58.609 "90728c64-9d60-42f3-8985-09f1d7bcd741" 00:12:58.609 ], 00:12:58.609 "product_name": "Raid Volume", 00:12:58.609 "block_size": 512, 00:12:58.609 "num_blocks": 253952, 00:12:58.609 "uuid": "90728c64-9d60-42f3-8985-09f1d7bcd741", 00:12:58.609 "assigned_rate_limits": { 00:12:58.609 "rw_ios_per_sec": 0, 00:12:58.609 "rw_mbytes_per_sec": 0, 00:12:58.609 "r_mbytes_per_sec": 0, 00:12:58.609 "w_mbytes_per_sec": 0 00:12:58.609 }, 00:12:58.609 "claimed": false, 00:12:58.609 "zoned": false, 00:12:58.609 "supported_io_types": { 00:12:58.609 "read": true, 00:12:58.609 "write": true, 00:12:58.609 "unmap": true, 00:12:58.609 "flush": true, 00:12:58.609 "reset": true, 00:12:58.609 "nvme_admin": false, 00:12:58.609 "nvme_io": false, 00:12:58.609 "nvme_io_md": false, 00:12:58.609 "write_zeroes": true, 00:12:58.609 "zcopy": false, 00:12:58.609 "get_zone_info": false, 00:12:58.609 "zone_management": false, 00:12:58.609 "zone_append": false, 00:12:58.609 "compare": false, 00:12:58.609 "compare_and_write": false, 00:12:58.609 "abort": 
false, 00:12:58.609 "seek_hole": false, 00:12:58.609 "seek_data": false, 00:12:58.609 "copy": false, 00:12:58.609 "nvme_iov_md": false 00:12:58.609 }, 00:12:58.609 "memory_domains": [ 00:12:58.609 { 00:12:58.609 "dma_device_id": "system", 00:12:58.609 "dma_device_type": 1 00:12:58.609 }, 00:12:58.609 { 00:12:58.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.609 "dma_device_type": 2 00:12:58.609 }, 00:12:58.609 { 00:12:58.609 "dma_device_id": "system", 00:12:58.609 "dma_device_type": 1 00:12:58.609 }, 00:12:58.609 { 00:12:58.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.609 "dma_device_type": 2 00:12:58.609 }, 00:12:58.609 { 00:12:58.609 "dma_device_id": "system", 00:12:58.609 "dma_device_type": 1 00:12:58.609 }, 00:12:58.609 { 00:12:58.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.609 "dma_device_type": 2 00:12:58.609 }, 00:12:58.609 { 00:12:58.609 "dma_device_id": "system", 00:12:58.609 "dma_device_type": 1 00:12:58.609 }, 00:12:58.609 { 00:12:58.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.609 "dma_device_type": 2 00:12:58.609 } 00:12:58.609 ], 00:12:58.609 "driver_specific": { 00:12:58.609 "raid": { 00:12:58.609 "uuid": "90728c64-9d60-42f3-8985-09f1d7bcd741", 00:12:58.609 "strip_size_kb": 64, 00:12:58.609 "state": "online", 00:12:58.609 "raid_level": "concat", 00:12:58.609 "superblock": true, 00:12:58.609 "num_base_bdevs": 4, 00:12:58.609 "num_base_bdevs_discovered": 4, 00:12:58.609 "num_base_bdevs_operational": 4, 00:12:58.609 "base_bdevs_list": [ 00:12:58.609 { 00:12:58.609 "name": "NewBaseBdev", 00:12:58.609 "uuid": "129f2229-2e49-4281-986d-d2d6f535a74e", 00:12:58.609 "is_configured": true, 00:12:58.609 "data_offset": 2048, 00:12:58.609 "data_size": 63488 00:12:58.609 }, 00:12:58.609 { 00:12:58.609 "name": "BaseBdev2", 00:12:58.609 "uuid": "41cc5cd4-c6ea-439a-901c-11de8796441b", 00:12:58.609 "is_configured": true, 00:12:58.609 "data_offset": 2048, 00:12:58.609 "data_size": 63488 00:12:58.609 }, 00:12:58.609 { 00:12:58.609 
"name": "BaseBdev3", 00:12:58.609 "uuid": "3a59633b-3ea4-4c81-ab15-7ad2d6700945", 00:12:58.609 "is_configured": true, 00:12:58.609 "data_offset": 2048, 00:12:58.609 "data_size": 63488 00:12:58.609 }, 00:12:58.609 { 00:12:58.609 "name": "BaseBdev4", 00:12:58.609 "uuid": "151ba01f-1111-49e1-bcc5-aee37831a487", 00:12:58.609 "is_configured": true, 00:12:58.609 "data_offset": 2048, 00:12:58.609 "data_size": 63488 00:12:58.609 } 00:12:58.609 ] 00:12:58.609 } 00:12:58.609 } 00:12:58.609 }' 00:12:58.609 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:58.609 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:58.609 BaseBdev2 00:12:58.609 BaseBdev3 00:12:58.609 BaseBdev4' 00:12:58.609 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:58.609 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:58.609 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:58.609 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:58.609 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.609 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.609 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:58.609 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:58.867 21:39:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.867 [2024-12-10 21:39:59.548726] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:58.867 [2024-12-10 21:39:59.548765] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:58.867 [2024-12-10 21:39:59.548860] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:58.867 [2024-12-10 21:39:59.548940] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:58.867 [2024-12-10 21:39:59.548952] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72085 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72085 ']' 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72085 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72085 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:58.867 killing process with pid 72085 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72085' 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72085 00:12:58.867 [2024-12-10 21:39:59.588567] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:58.867 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72085 00:12:59.435 [2024-12-10 21:40:00.078377] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:00.815 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:00.815 00:13:00.815 real 0m12.683s 00:13:00.815 user 0m20.071s 00:13:00.815 sys 0m2.217s 00:13:00.815 21:40:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:00.815 21:40:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.815 ************************************ 00:13:00.815 END TEST raid_state_function_test_sb 00:13:00.815 ************************************ 00:13:00.815 21:40:01 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:13:00.815 21:40:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:00.815 21:40:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:00.815 21:40:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:00.815 ************************************ 00:13:00.815 START TEST raid_superblock_test 00:13:00.815 ************************************ 00:13:00.815 21:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:13:00.815 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:13:00.815 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:13:00.815 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:00.815 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:00.815 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:00.815 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:00.815 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:00.815 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:00.815 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:00.815 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:00.815 21:40:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:00.815 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:00.815 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:00.815 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:13:00.815 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:00.815 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:00.815 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72768 00:13:00.815 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:00.815 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72768 00:13:00.815 21:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72768 ']' 00:13:00.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.815 21:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.815 21:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:00.815 21:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.815 21:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:00.815 21:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.815 [2024-12-10 21:40:01.593646] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:13:01.081 [2024-12-10 21:40:01.594373] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72768 ] 00:13:01.081 [2024-12-10 21:40:01.754761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.339 [2024-12-10 21:40:01.890121] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.598 [2024-12-10 21:40:02.135454] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:01.598 [2024-12-10 21:40:02.135530] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:01.858 
21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.858 malloc1 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.858 [2024-12-10 21:40:02.551699] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:01.858 [2024-12-10 21:40:02.551834] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.858 [2024-12-10 21:40:02.551887] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:01.858 [2024-12-10 21:40:02.551958] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.858 [2024-12-10 21:40:02.554528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.858 [2024-12-10 21:40:02.554603] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:01.858 pt1 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.858 malloc2 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.858 [2024-12-10 21:40:02.617965] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:01.858 [2024-12-10 21:40:02.618034] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.858 [2024-12-10 21:40:02.618059] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:01.858 [2024-12-10 21:40:02.618069] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.858 [2024-12-10 21:40:02.620591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.858 [2024-12-10 21:40:02.620713] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:01.858 
pt2 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.858 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.151 malloc3 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.151 [2024-12-10 21:40:02.697090] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:02.151 [2024-12-10 21:40:02.697223] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.151 [2024-12-10 21:40:02.697273] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:02.151 [2024-12-10 21:40:02.697318] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.151 [2024-12-10 21:40:02.699802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.151 [2024-12-10 21:40:02.699887] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:02.151 pt3 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.151 malloc4 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.151 [2024-12-10 21:40:02.763614] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:02.151 [2024-12-10 21:40:02.763746] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.151 [2024-12-10 21:40:02.763797] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:02.151 [2024-12-10 21:40:02.763832] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.151 [2024-12-10 21:40:02.766283] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.151 [2024-12-10 21:40:02.766362] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:02.151 pt4 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.151 [2024-12-10 21:40:02.775620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:02.151 [2024-12-10 
21:40:02.777724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:02.151 [2024-12-10 21:40:02.777818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:02.151 [2024-12-10 21:40:02.777873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:02.151 [2024-12-10 21:40:02.778067] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:02.151 [2024-12-10 21:40:02.778080] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:02.151 [2024-12-10 21:40:02.778367] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:02.151 [2024-12-10 21:40:02.778572] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:02.151 [2024-12-10 21:40:02.778588] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:02.151 [2024-12-10 21:40:02.778754] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.151 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.151 "name": "raid_bdev1", 00:13:02.151 "uuid": "84a0a9a6-14cc-44ee-a1eb-80444fc78872", 00:13:02.151 "strip_size_kb": 64, 00:13:02.151 "state": "online", 00:13:02.151 "raid_level": "concat", 00:13:02.151 "superblock": true, 00:13:02.151 "num_base_bdevs": 4, 00:13:02.151 "num_base_bdevs_discovered": 4, 00:13:02.151 "num_base_bdevs_operational": 4, 00:13:02.151 "base_bdevs_list": [ 00:13:02.151 { 00:13:02.151 "name": "pt1", 00:13:02.151 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:02.151 "is_configured": true, 00:13:02.151 "data_offset": 2048, 00:13:02.151 "data_size": 63488 00:13:02.151 }, 00:13:02.151 { 00:13:02.151 "name": "pt2", 00:13:02.151 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:02.151 "is_configured": true, 00:13:02.151 "data_offset": 2048, 00:13:02.151 "data_size": 63488 00:13:02.151 }, 00:13:02.151 { 00:13:02.151 "name": "pt3", 00:13:02.151 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:02.151 "is_configured": true, 00:13:02.151 "data_offset": 2048, 00:13:02.151 
"data_size": 63488 00:13:02.151 }, 00:13:02.151 { 00:13:02.151 "name": "pt4", 00:13:02.151 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:02.152 "is_configured": true, 00:13:02.152 "data_offset": 2048, 00:13:02.152 "data_size": 63488 00:13:02.152 } 00:13:02.152 ] 00:13:02.152 }' 00:13:02.152 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.152 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.730 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:02.730 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:02.730 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:02.730 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:02.730 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:02.730 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:02.730 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:02.730 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:02.730 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.730 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.730 [2024-12-10 21:40:03.287173] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:02.730 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.730 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:02.730 "name": "raid_bdev1", 00:13:02.730 "aliases": [ 00:13:02.730 "84a0a9a6-14cc-44ee-a1eb-80444fc78872" 
00:13:02.730 ], 00:13:02.730 "product_name": "Raid Volume", 00:13:02.730 "block_size": 512, 00:13:02.730 "num_blocks": 253952, 00:13:02.730 "uuid": "84a0a9a6-14cc-44ee-a1eb-80444fc78872", 00:13:02.730 "assigned_rate_limits": { 00:13:02.730 "rw_ios_per_sec": 0, 00:13:02.730 "rw_mbytes_per_sec": 0, 00:13:02.730 "r_mbytes_per_sec": 0, 00:13:02.730 "w_mbytes_per_sec": 0 00:13:02.730 }, 00:13:02.730 "claimed": false, 00:13:02.730 "zoned": false, 00:13:02.730 "supported_io_types": { 00:13:02.730 "read": true, 00:13:02.730 "write": true, 00:13:02.730 "unmap": true, 00:13:02.730 "flush": true, 00:13:02.730 "reset": true, 00:13:02.730 "nvme_admin": false, 00:13:02.730 "nvme_io": false, 00:13:02.730 "nvme_io_md": false, 00:13:02.730 "write_zeroes": true, 00:13:02.730 "zcopy": false, 00:13:02.730 "get_zone_info": false, 00:13:02.730 "zone_management": false, 00:13:02.730 "zone_append": false, 00:13:02.730 "compare": false, 00:13:02.730 "compare_and_write": false, 00:13:02.730 "abort": false, 00:13:02.730 "seek_hole": false, 00:13:02.730 "seek_data": false, 00:13:02.730 "copy": false, 00:13:02.730 "nvme_iov_md": false 00:13:02.730 }, 00:13:02.730 "memory_domains": [ 00:13:02.730 { 00:13:02.730 "dma_device_id": "system", 00:13:02.730 "dma_device_type": 1 00:13:02.730 }, 00:13:02.730 { 00:13:02.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.730 "dma_device_type": 2 00:13:02.730 }, 00:13:02.730 { 00:13:02.730 "dma_device_id": "system", 00:13:02.730 "dma_device_type": 1 00:13:02.730 }, 00:13:02.730 { 00:13:02.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.730 "dma_device_type": 2 00:13:02.730 }, 00:13:02.730 { 00:13:02.730 "dma_device_id": "system", 00:13:02.730 "dma_device_type": 1 00:13:02.730 }, 00:13:02.730 { 00:13:02.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.730 "dma_device_type": 2 00:13:02.730 }, 00:13:02.730 { 00:13:02.730 "dma_device_id": "system", 00:13:02.730 "dma_device_type": 1 00:13:02.730 }, 00:13:02.730 { 00:13:02.730 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:02.730 "dma_device_type": 2 00:13:02.730 } 00:13:02.730 ], 00:13:02.730 "driver_specific": { 00:13:02.730 "raid": { 00:13:02.730 "uuid": "84a0a9a6-14cc-44ee-a1eb-80444fc78872", 00:13:02.730 "strip_size_kb": 64, 00:13:02.730 "state": "online", 00:13:02.730 "raid_level": "concat", 00:13:02.730 "superblock": true, 00:13:02.730 "num_base_bdevs": 4, 00:13:02.730 "num_base_bdevs_discovered": 4, 00:13:02.730 "num_base_bdevs_operational": 4, 00:13:02.730 "base_bdevs_list": [ 00:13:02.730 { 00:13:02.730 "name": "pt1", 00:13:02.730 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:02.730 "is_configured": true, 00:13:02.730 "data_offset": 2048, 00:13:02.730 "data_size": 63488 00:13:02.730 }, 00:13:02.730 { 00:13:02.730 "name": "pt2", 00:13:02.730 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:02.730 "is_configured": true, 00:13:02.730 "data_offset": 2048, 00:13:02.730 "data_size": 63488 00:13:02.730 }, 00:13:02.730 { 00:13:02.730 "name": "pt3", 00:13:02.730 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:02.730 "is_configured": true, 00:13:02.730 "data_offset": 2048, 00:13:02.730 "data_size": 63488 00:13:02.730 }, 00:13:02.730 { 00:13:02.730 "name": "pt4", 00:13:02.730 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:02.730 "is_configured": true, 00:13:02.730 "data_offset": 2048, 00:13:02.730 "data_size": 63488 00:13:02.730 } 00:13:02.730 ] 00:13:02.730 } 00:13:02.730 } 00:13:02.730 }' 00:13:02.730 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:02.730 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:02.730 pt2 00:13:02.730 pt3 00:13:02.730 pt4' 00:13:02.730 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:02.730 21:40:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:02.730 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:02.730 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:02.730 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:02.730 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.730 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.730 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.730 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:02.730 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:02.731 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:02.731 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:02.731 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:02.731 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.731 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.731 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:02.991 21:40:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.991 [2024-12-10 21:40:03.622625] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=84a0a9a6-14cc-44ee-a1eb-80444fc78872 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 84a0a9a6-14cc-44ee-a1eb-80444fc78872 ']' 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.991 [2024-12-10 21:40:03.670153] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:02.991 [2024-12-10 21:40:03.670233] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:02.991 [2024-12-10 21:40:03.670341] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:02.991 [2024-12-10 21:40:03.670439] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:02.991 [2024-12-10 21:40:03.670457] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.991 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.251 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:03.251 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.251 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:03.251 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:03.251 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:03.251 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:03.251 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:03.251 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:03.251 21:40:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:03.251 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:03.251 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:03.251 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.252 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.252 [2024-12-10 21:40:03.833929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:03.252 [2024-12-10 21:40:03.836124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:03.252 [2024-12-10 21:40:03.836239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:03.252 [2024-12-10 21:40:03.836314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:03.252 [2024-12-10 21:40:03.836408] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:03.252 [2024-12-10 21:40:03.836528] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:03.252 [2024-12-10 21:40:03.836592] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:03.252 [2024-12-10 21:40:03.836656] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:03.252 [2024-12-10 21:40:03.836710] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:03.252 [2024-12-10 21:40:03.836749] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:13:03.252 request: 00:13:03.252 { 00:13:03.252 "name": "raid_bdev1", 00:13:03.252 "raid_level": "concat", 00:13:03.252 "base_bdevs": [ 00:13:03.252 "malloc1", 00:13:03.252 "malloc2", 00:13:03.252 "malloc3", 00:13:03.252 "malloc4" 00:13:03.252 ], 00:13:03.252 "strip_size_kb": 64, 00:13:03.252 "superblock": false, 00:13:03.252 "method": "bdev_raid_create", 00:13:03.252 "req_id": 1 00:13:03.252 } 00:13:03.252 Got JSON-RPC error response 00:13:03.252 response: 00:13:03.252 { 00:13:03.252 "code": -17, 00:13:03.252 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:03.252 } 00:13:03.252 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:03.252 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:03.252 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:03.252 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:03.252 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:03.252 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.252 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:03.252 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.252 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.252 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.252 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:03.252 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:03.252 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:13:03.252 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.252 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.252 [2024-12-10 21:40:03.897768] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:03.252 [2024-12-10 21:40:03.897912] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.252 [2024-12-10 21:40:03.897980] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:03.252 [2024-12-10 21:40:03.898029] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.252 [2024-12-10 21:40:03.900610] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.252 [2024-12-10 21:40:03.900709] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:03.252 [2024-12-10 21:40:03.900845] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:03.252 [2024-12-10 21:40:03.900949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:03.252 pt1 00:13:03.252 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.252 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:13:03.252 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:03.252 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:03.252 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:03.252 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:03.252 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:13:03.252 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.252 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.252 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.252 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.252 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.252 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.252 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.252 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.252 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.252 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.252 "name": "raid_bdev1", 00:13:03.252 "uuid": "84a0a9a6-14cc-44ee-a1eb-80444fc78872", 00:13:03.252 "strip_size_kb": 64, 00:13:03.252 "state": "configuring", 00:13:03.252 "raid_level": "concat", 00:13:03.252 "superblock": true, 00:13:03.252 "num_base_bdevs": 4, 00:13:03.252 "num_base_bdevs_discovered": 1, 00:13:03.252 "num_base_bdevs_operational": 4, 00:13:03.252 "base_bdevs_list": [ 00:13:03.252 { 00:13:03.252 "name": "pt1", 00:13:03.252 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:03.252 "is_configured": true, 00:13:03.252 "data_offset": 2048, 00:13:03.252 "data_size": 63488 00:13:03.252 }, 00:13:03.252 { 00:13:03.252 "name": null, 00:13:03.252 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:03.252 "is_configured": false, 00:13:03.252 "data_offset": 2048, 00:13:03.252 "data_size": 63488 00:13:03.252 }, 00:13:03.252 { 00:13:03.252 "name": null, 00:13:03.252 
"uuid": "00000000-0000-0000-0000-000000000003", 00:13:03.252 "is_configured": false, 00:13:03.252 "data_offset": 2048, 00:13:03.252 "data_size": 63488 00:13:03.252 }, 00:13:03.252 { 00:13:03.252 "name": null, 00:13:03.252 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:03.252 "is_configured": false, 00:13:03.252 "data_offset": 2048, 00:13:03.252 "data_size": 63488 00:13:03.252 } 00:13:03.252 ] 00:13:03.252 }' 00:13:03.252 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.252 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.821 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:13:03.821 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:03.821 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.821 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.821 [2024-12-10 21:40:04.432887] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:03.821 [2024-12-10 21:40:04.432978] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.821 [2024-12-10 21:40:04.433000] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:03.821 [2024-12-10 21:40:04.433014] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.821 [2024-12-10 21:40:04.433537] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.821 [2024-12-10 21:40:04.433562] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:03.821 [2024-12-10 21:40:04.433662] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:03.821 [2024-12-10 21:40:04.433690] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:03.821 pt2 00:13:03.821 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.821 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:03.821 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.821 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.821 [2024-12-10 21:40:04.444897] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:03.821 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.821 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:13:03.821 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:03.821 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:03.821 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:03.821 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:03.821 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:03.821 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.821 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.821 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.821 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.821 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.821 21:40:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.821 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.821 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.821 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.821 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.821 "name": "raid_bdev1", 00:13:03.821 "uuid": "84a0a9a6-14cc-44ee-a1eb-80444fc78872", 00:13:03.821 "strip_size_kb": 64, 00:13:03.821 "state": "configuring", 00:13:03.821 "raid_level": "concat", 00:13:03.821 "superblock": true, 00:13:03.821 "num_base_bdevs": 4, 00:13:03.821 "num_base_bdevs_discovered": 1, 00:13:03.821 "num_base_bdevs_operational": 4, 00:13:03.821 "base_bdevs_list": [ 00:13:03.821 { 00:13:03.821 "name": "pt1", 00:13:03.821 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:03.821 "is_configured": true, 00:13:03.821 "data_offset": 2048, 00:13:03.821 "data_size": 63488 00:13:03.821 }, 00:13:03.821 { 00:13:03.821 "name": null, 00:13:03.821 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:03.821 "is_configured": false, 00:13:03.821 "data_offset": 0, 00:13:03.821 "data_size": 63488 00:13:03.821 }, 00:13:03.821 { 00:13:03.821 "name": null, 00:13:03.821 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:03.821 "is_configured": false, 00:13:03.821 "data_offset": 2048, 00:13:03.821 "data_size": 63488 00:13:03.821 }, 00:13:03.821 { 00:13:03.821 "name": null, 00:13:03.821 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:03.821 "is_configured": false, 00:13:03.821 "data_offset": 2048, 00:13:03.821 "data_size": 63488 00:13:03.821 } 00:13:03.821 ] 00:13:03.821 }' 00:13:03.821 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.821 21:40:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:04.393 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:04.393 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:04.393 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:04.393 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.393 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.393 [2024-12-10 21:40:04.884924] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:04.393 [2024-12-10 21:40:04.885059] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.393 [2024-12-10 21:40:04.885118] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:04.393 [2024-12-10 21:40:04.885154] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.393 [2024-12-10 21:40:04.885713] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.393 [2024-12-10 21:40:04.885778] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:04.393 [2024-12-10 21:40:04.885913] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:04.393 [2024-12-10 21:40:04.885970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:04.393 pt2 00:13:04.393 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.393 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:04.393 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:04.393 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:04.393 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.393 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.393 [2024-12-10 21:40:04.896881] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:04.393 [2024-12-10 21:40:04.896986] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.393 [2024-12-10 21:40:04.897039] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:04.393 [2024-12-10 21:40:04.897052] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.393 [2024-12-10 21:40:04.897556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.393 [2024-12-10 21:40:04.897585] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:04.393 [2024-12-10 21:40:04.897675] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:04.393 [2024-12-10 21:40:04.897708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:04.393 pt3 00:13:04.393 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.393 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:04.393 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:04.393 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:04.393 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.393 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.393 [2024-12-10 21:40:04.912860] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:04.393 [2024-12-10 21:40:04.912922] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.393 [2024-12-10 21:40:04.912946] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:04.393 [2024-12-10 21:40:04.912957] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.393 [2024-12-10 21:40:04.913469] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.393 [2024-12-10 21:40:04.913491] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:04.393 [2024-12-10 21:40:04.913584] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:04.393 [2024-12-10 21:40:04.913612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:04.393 [2024-12-10 21:40:04.913794] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:04.393 [2024-12-10 21:40:04.913804] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:04.393 [2024-12-10 21:40:04.914077] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:04.393 [2024-12-10 21:40:04.914268] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:04.393 [2024-12-10 21:40:04.914283] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:04.393 [2024-12-10 21:40:04.914463] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.393 pt4 00:13:04.393 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.393 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:04.393 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:13:04.393 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:04.393 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.393 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.393 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:04.393 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:04.393 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:04.393 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.393 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.393 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.393 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.393 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.393 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.393 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.393 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.393 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.393 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.393 "name": "raid_bdev1", 00:13:04.393 "uuid": "84a0a9a6-14cc-44ee-a1eb-80444fc78872", 00:13:04.393 "strip_size_kb": 64, 00:13:04.393 "state": "online", 00:13:04.393 "raid_level": "concat", 00:13:04.393 
"superblock": true, 00:13:04.393 "num_base_bdevs": 4, 00:13:04.393 "num_base_bdevs_discovered": 4, 00:13:04.393 "num_base_bdevs_operational": 4, 00:13:04.393 "base_bdevs_list": [ 00:13:04.393 { 00:13:04.393 "name": "pt1", 00:13:04.393 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:04.393 "is_configured": true, 00:13:04.393 "data_offset": 2048, 00:13:04.393 "data_size": 63488 00:13:04.393 }, 00:13:04.393 { 00:13:04.393 "name": "pt2", 00:13:04.393 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:04.393 "is_configured": true, 00:13:04.393 "data_offset": 2048, 00:13:04.393 "data_size": 63488 00:13:04.393 }, 00:13:04.393 { 00:13:04.393 "name": "pt3", 00:13:04.393 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:04.393 "is_configured": true, 00:13:04.393 "data_offset": 2048, 00:13:04.393 "data_size": 63488 00:13:04.393 }, 00:13:04.393 { 00:13:04.393 "name": "pt4", 00:13:04.393 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:04.393 "is_configured": true, 00:13:04.393 "data_offset": 2048, 00:13:04.393 "data_size": 63488 00:13:04.393 } 00:13:04.393 ] 00:13:04.393 }' 00:13:04.393 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.393 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.665 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:04.665 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:04.665 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:04.665 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:04.665 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:04.665 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:04.665 21:40:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:04.665 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:04.665 21:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.665 21:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.665 [2024-12-10 21:40:05.392545] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:04.665 21:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.665 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:04.665 "name": "raid_bdev1", 00:13:04.665 "aliases": [ 00:13:04.665 "84a0a9a6-14cc-44ee-a1eb-80444fc78872" 00:13:04.665 ], 00:13:04.665 "product_name": "Raid Volume", 00:13:04.665 "block_size": 512, 00:13:04.665 "num_blocks": 253952, 00:13:04.665 "uuid": "84a0a9a6-14cc-44ee-a1eb-80444fc78872", 00:13:04.665 "assigned_rate_limits": { 00:13:04.665 "rw_ios_per_sec": 0, 00:13:04.665 "rw_mbytes_per_sec": 0, 00:13:04.665 "r_mbytes_per_sec": 0, 00:13:04.665 "w_mbytes_per_sec": 0 00:13:04.665 }, 00:13:04.665 "claimed": false, 00:13:04.665 "zoned": false, 00:13:04.665 "supported_io_types": { 00:13:04.665 "read": true, 00:13:04.665 "write": true, 00:13:04.665 "unmap": true, 00:13:04.665 "flush": true, 00:13:04.665 "reset": true, 00:13:04.665 "nvme_admin": false, 00:13:04.665 "nvme_io": false, 00:13:04.665 "nvme_io_md": false, 00:13:04.665 "write_zeroes": true, 00:13:04.665 "zcopy": false, 00:13:04.665 "get_zone_info": false, 00:13:04.665 "zone_management": false, 00:13:04.665 "zone_append": false, 00:13:04.665 "compare": false, 00:13:04.665 "compare_and_write": false, 00:13:04.665 "abort": false, 00:13:04.665 "seek_hole": false, 00:13:04.665 "seek_data": false, 00:13:04.665 "copy": false, 00:13:04.665 "nvme_iov_md": false 00:13:04.665 }, 00:13:04.665 
"memory_domains": [ 00:13:04.665 { 00:13:04.665 "dma_device_id": "system", 00:13:04.665 "dma_device_type": 1 00:13:04.665 }, 00:13:04.665 { 00:13:04.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.665 "dma_device_type": 2 00:13:04.665 }, 00:13:04.665 { 00:13:04.665 "dma_device_id": "system", 00:13:04.665 "dma_device_type": 1 00:13:04.665 }, 00:13:04.665 { 00:13:04.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.665 "dma_device_type": 2 00:13:04.665 }, 00:13:04.665 { 00:13:04.665 "dma_device_id": "system", 00:13:04.665 "dma_device_type": 1 00:13:04.665 }, 00:13:04.665 { 00:13:04.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.665 "dma_device_type": 2 00:13:04.665 }, 00:13:04.665 { 00:13:04.665 "dma_device_id": "system", 00:13:04.665 "dma_device_type": 1 00:13:04.665 }, 00:13:04.665 { 00:13:04.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.665 "dma_device_type": 2 00:13:04.665 } 00:13:04.665 ], 00:13:04.665 "driver_specific": { 00:13:04.665 "raid": { 00:13:04.665 "uuid": "84a0a9a6-14cc-44ee-a1eb-80444fc78872", 00:13:04.665 "strip_size_kb": 64, 00:13:04.665 "state": "online", 00:13:04.665 "raid_level": "concat", 00:13:04.665 "superblock": true, 00:13:04.665 "num_base_bdevs": 4, 00:13:04.665 "num_base_bdevs_discovered": 4, 00:13:04.665 "num_base_bdevs_operational": 4, 00:13:04.665 "base_bdevs_list": [ 00:13:04.665 { 00:13:04.665 "name": "pt1", 00:13:04.665 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:04.665 "is_configured": true, 00:13:04.665 "data_offset": 2048, 00:13:04.665 "data_size": 63488 00:13:04.665 }, 00:13:04.665 { 00:13:04.665 "name": "pt2", 00:13:04.665 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:04.665 "is_configured": true, 00:13:04.665 "data_offset": 2048, 00:13:04.665 "data_size": 63488 00:13:04.665 }, 00:13:04.665 { 00:13:04.665 "name": "pt3", 00:13:04.665 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:04.665 "is_configured": true, 00:13:04.665 "data_offset": 2048, 00:13:04.665 "data_size": 63488 
00:13:04.665 }, 00:13:04.665 { 00:13:04.665 "name": "pt4", 00:13:04.665 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:04.665 "is_configured": true, 00:13:04.665 "data_offset": 2048, 00:13:04.665 "data_size": 63488 00:13:04.665 } 00:13:04.665 ] 00:13:04.665 } 00:13:04.665 } 00:13:04.665 }' 00:13:04.666 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:04.926 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:04.926 pt2 00:13:04.926 pt3 00:13:04.926 pt4' 00:13:04.926 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:04.926 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:04.926 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:04.926 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:04.926 21:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.926 21:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.926 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:04.926 21:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.926 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:04.926 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:04.926 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:04.926 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:13:04.926 21:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.926 21:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.926 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:04.926 21:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.926 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:04.926 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:04.926 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:04.926 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:04.926 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:04.926 21:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.926 21:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.926 21:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.926 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:04.926 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:04.926 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:04.926 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:04.926 21:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.926 21:40:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:04.926 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:04.926 21:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.185 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:05.185 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:05.185 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:05.185 21:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.185 21:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.185 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:05.186 [2024-12-10 21:40:05.732098] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:05.186 21:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.186 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 84a0a9a6-14cc-44ee-a1eb-80444fc78872 '!=' 84a0a9a6-14cc-44ee-a1eb-80444fc78872 ']' 00:13:05.186 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:13:05.186 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:05.186 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:05.186 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72768 00:13:05.186 21:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72768 ']' 00:13:05.186 21:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72768 00:13:05.186 21:40:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:13:05.186 21:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:05.186 21:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72768 00:13:05.186 21:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:05.186 killing process with pid 72768 00:13:05.186 21:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:05.186 21:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72768' 00:13:05.186 21:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72768 00:13:05.186 [2024-12-10 21:40:05.814194] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:05.186 [2024-12-10 21:40:05.814297] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:05.186 21:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72768 00:13:05.186 [2024-12-10 21:40:05.814382] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:05.186 [2024-12-10 21:40:05.814393] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:05.755 [2024-12-10 21:40:06.300270] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:07.139 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:07.139 00:13:07.139 real 0m6.176s 00:13:07.139 user 0m8.803s 00:13:07.139 sys 0m0.972s 00:13:07.139 21:40:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:07.139 21:40:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.139 ************************************ 00:13:07.139 END TEST raid_superblock_test 
00:13:07.139 ************************************ 00:13:07.139 21:40:07 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:13:07.139 21:40:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:07.139 21:40:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:07.139 21:40:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:07.139 ************************************ 00:13:07.139 START TEST raid_read_error_test 00:13:07.139 ************************************ 00:13:07.139 21:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:13:07.139 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:07.139 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:07.139 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:07.139 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:07.139 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:07.139 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:07.139 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:07.139 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:07.139 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:07.139 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:07.139 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:07.139 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:07.139 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:13:07.139 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:07.139 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:07.139 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:07.139 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:07.139 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:07.139 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:07.139 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:07.139 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:07.139 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:07.139 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:07.139 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:07.139 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:07.139 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:07.139 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:07.139 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:07.139 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.zelQuWFALg 00:13:07.139 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73038 00:13:07.139 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z 
-f -L bdev_raid 00:13:07.139 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73038 00:13:07.139 21:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73038 ']' 00:13:07.139 21:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.139 21:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:07.139 21:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.139 21:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:07.139 21:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.139 [2024-12-10 21:40:07.873396] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:13:07.139 [2024-12-10 21:40:07.873549] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73038 ] 00:13:07.399 [2024-12-10 21:40:08.058973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.659 [2024-12-10 21:40:08.194503] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.918 [2024-12-10 21:40:08.442760] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:07.918 [2024-12-10 21:40:08.442801] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:08.177 21:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:08.177 21:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:08.177 21:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:08.177 21:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:08.177 21:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.177 21:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.177 BaseBdev1_malloc 00:13:08.177 21:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.177 21:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:08.177 21:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.177 21:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.177 true 00:13:08.177 21:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:08.177 21:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:08.177 21:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.177 21:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.177 [2024-12-10 21:40:08.852174] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:08.177 [2024-12-10 21:40:08.852248] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.177 [2024-12-10 21:40:08.852273] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:08.177 [2024-12-10 21:40:08.852285] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.177 [2024-12-10 21:40:08.854731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.177 [2024-12-10 21:40:08.854783] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:08.177 BaseBdev1 00:13:08.177 21:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.177 21:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:08.177 21:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:08.177 21:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.177 21:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.177 BaseBdev2_malloc 00:13:08.177 21:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.177 21:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:08.177 21:40:08 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.177 21:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.177 true 00:13:08.177 21:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.177 21:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:08.177 21:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.177 21:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.177 [2024-12-10 21:40:08.926908] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:08.177 [2024-12-10 21:40:08.926975] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.177 [2024-12-10 21:40:08.926996] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:08.177 [2024-12-10 21:40:08.927008] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.177 [2024-12-10 21:40:08.929521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.177 [2024-12-10 21:40:08.929569] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:08.177 BaseBdev2 00:13:08.177 21:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.177 21:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:08.177 21:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:08.177 21:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.177 21:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.437 BaseBdev3_malloc 00:13:08.437 21:40:08 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.437 21:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:08.437 21:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.437 21:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.437 true 00:13:08.437 21:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.437 21:40:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:08.437 21:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.438 21:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.438 [2024-12-10 21:40:09.014902] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:08.438 [2024-12-10 21:40:09.014975] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.438 [2024-12-10 21:40:09.015008] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:08.438 [2024-12-10 21:40:09.015027] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.438 [2024-12-10 21:40:09.017703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.438 [2024-12-10 21:40:09.017831] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:08.438 BaseBdev3 00:13:08.438 21:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.438 21:40:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:08.438 21:40:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:13:08.438 21:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.438 21:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.438 BaseBdev4_malloc 00:13:08.438 21:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.438 21:40:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:08.438 21:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.438 21:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.438 true 00:13:08.438 21:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.438 21:40:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:08.438 21:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.438 21:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.438 [2024-12-10 21:40:09.090909] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:08.438 [2024-12-10 21:40:09.091029] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.438 [2024-12-10 21:40:09.091084] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:08.438 [2024-12-10 21:40:09.091156] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.438 [2024-12-10 21:40:09.093677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.438 [2024-12-10 21:40:09.093770] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:08.438 BaseBdev4 00:13:08.438 21:40:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.438 21:40:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:08.438 21:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.438 21:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.438 [2024-12-10 21:40:09.102975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:08.438 [2024-12-10 21:40:09.105142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:08.438 [2024-12-10 21:40:09.105279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:08.438 [2024-12-10 21:40:09.105401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:08.438 [2024-12-10 21:40:09.105747] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:08.438 [2024-12-10 21:40:09.105807] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:08.438 [2024-12-10 21:40:09.106146] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:08.438 [2024-12-10 21:40:09.106392] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:08.438 [2024-12-10 21:40:09.106456] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:08.438 [2024-12-10 21:40:09.106723] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.438 21:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.438 21:40:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:08.438 21:40:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:08.438 21:40:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.438 21:40:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:08.438 21:40:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:08.438 21:40:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:08.438 21:40:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.438 21:40:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.438 21:40:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.438 21:40:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.438 21:40:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.438 21:40:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.438 21:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.438 21:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.438 21:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.438 21:40:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.438 "name": "raid_bdev1", 00:13:08.438 "uuid": "e25db7bd-419c-4850-bb43-ecb2dfdbdfb0", 00:13:08.438 "strip_size_kb": 64, 00:13:08.438 "state": "online", 00:13:08.438 "raid_level": "concat", 00:13:08.438 "superblock": true, 00:13:08.438 "num_base_bdevs": 4, 00:13:08.438 "num_base_bdevs_discovered": 4, 00:13:08.438 "num_base_bdevs_operational": 4, 00:13:08.438 "base_bdevs_list": [ 
00:13:08.438 { 00:13:08.438 "name": "BaseBdev1", 00:13:08.438 "uuid": "1bfa3e9b-6349-5a8f-a639-c64a3ac0771d", 00:13:08.438 "is_configured": true, 00:13:08.438 "data_offset": 2048, 00:13:08.438 "data_size": 63488 00:13:08.438 }, 00:13:08.438 { 00:13:08.438 "name": "BaseBdev2", 00:13:08.438 "uuid": "55d8ca13-b141-522e-a0ed-733c21a63e2f", 00:13:08.438 "is_configured": true, 00:13:08.438 "data_offset": 2048, 00:13:08.438 "data_size": 63488 00:13:08.438 }, 00:13:08.438 { 00:13:08.438 "name": "BaseBdev3", 00:13:08.438 "uuid": "e1a38671-1768-5e93-a67e-a5ba32d1bfd4", 00:13:08.438 "is_configured": true, 00:13:08.438 "data_offset": 2048, 00:13:08.438 "data_size": 63488 00:13:08.438 }, 00:13:08.438 { 00:13:08.438 "name": "BaseBdev4", 00:13:08.438 "uuid": "a34b72c0-f76a-5115-90cb-b977fbdd6509", 00:13:08.438 "is_configured": true, 00:13:08.438 "data_offset": 2048, 00:13:08.438 "data_size": 63488 00:13:08.438 } 00:13:08.438 ] 00:13:08.438 }' 00:13:08.438 21:40:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.438 21:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.010 21:40:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:09.010 21:40:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:09.010 [2024-12-10 21:40:09.699688] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:09.948 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:09.948 21:40:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.948 21:40:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.948 21:40:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.948 21:40:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:09.948 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:09.948 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:09.948 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:09.948 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.948 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.948 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:09.948 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:09.948 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:09.948 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.948 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.948 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.948 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.948 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.948 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.948 21:40:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.948 21:40:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.948 21:40:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.948 21:40:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.948 "name": "raid_bdev1", 00:13:09.948 "uuid": "e25db7bd-419c-4850-bb43-ecb2dfdbdfb0", 00:13:09.948 "strip_size_kb": 64, 00:13:09.948 "state": "online", 00:13:09.948 "raid_level": "concat", 00:13:09.948 "superblock": true, 00:13:09.948 "num_base_bdevs": 4, 00:13:09.948 "num_base_bdevs_discovered": 4, 00:13:09.948 "num_base_bdevs_operational": 4, 00:13:09.948 "base_bdevs_list": [ 00:13:09.948 { 00:13:09.948 "name": "BaseBdev1", 00:13:09.948 "uuid": "1bfa3e9b-6349-5a8f-a639-c64a3ac0771d", 00:13:09.948 "is_configured": true, 00:13:09.948 "data_offset": 2048, 00:13:09.948 "data_size": 63488 00:13:09.948 }, 00:13:09.948 { 00:13:09.948 "name": "BaseBdev2", 00:13:09.948 "uuid": "55d8ca13-b141-522e-a0ed-733c21a63e2f", 00:13:09.948 "is_configured": true, 00:13:09.948 "data_offset": 2048, 00:13:09.948 "data_size": 63488 00:13:09.948 }, 00:13:09.948 { 00:13:09.948 "name": "BaseBdev3", 00:13:09.948 "uuid": "e1a38671-1768-5e93-a67e-a5ba32d1bfd4", 00:13:09.948 "is_configured": true, 00:13:09.948 "data_offset": 2048, 00:13:09.948 "data_size": 63488 00:13:09.948 }, 00:13:09.948 { 00:13:09.948 "name": "BaseBdev4", 00:13:09.948 "uuid": "a34b72c0-f76a-5115-90cb-b977fbdd6509", 00:13:09.948 "is_configured": true, 00:13:09.948 "data_offset": 2048, 00:13:09.948 "data_size": 63488 00:13:09.948 } 00:13:09.948 ] 00:13:09.948 }' 00:13:09.948 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.949 21:40:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.517 21:40:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:10.517 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.517 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.517 [2024-12-10 21:40:11.056807] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:10.517 [2024-12-10 21:40:11.056927] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:10.517 [2024-12-10 21:40:11.060134] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:10.517 [2024-12-10 21:40:11.060269] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.517 [2024-12-10 21:40:11.060351] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:10.517 [2024-12-10 21:40:11.060415] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:10.517 { 00:13:10.517 "results": [ 00:13:10.517 { 00:13:10.517 "job": "raid_bdev1", 00:13:10.517 "core_mask": "0x1", 00:13:10.517 "workload": "randrw", 00:13:10.517 "percentage": 50, 00:13:10.517 "status": "finished", 00:13:10.517 "queue_depth": 1, 00:13:10.517 "io_size": 131072, 00:13:10.517 "runtime": 1.357861, 00:13:10.517 "iops": 12990.283983412146, 00:13:10.517 "mibps": 1623.7854979265182, 00:13:10.517 "io_failed": 1, 00:13:10.517 "io_timeout": 0, 00:13:10.517 "avg_latency_us": 106.38135400885244, 00:13:10.517 "min_latency_us": 29.736244541484716, 00:13:10.517 "max_latency_us": 1752.8733624454148 00:13:10.517 } 00:13:10.517 ], 00:13:10.517 "core_count": 1 00:13:10.517 } 00:13:10.517 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.517 21:40:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73038 00:13:10.517 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73038 ']' 00:13:10.517 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73038 00:13:10.517 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:13:10.517 21:40:11 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:10.517 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73038 00:13:10.517 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:10.517 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:10.517 killing process with pid 73038 00:13:10.517 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73038' 00:13:10.517 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73038 00:13:10.517 [2024-12-10 21:40:11.106286] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:10.517 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73038 00:13:10.777 [2024-12-10 21:40:11.451014] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:12.156 21:40:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.zelQuWFALg 00:13:12.156 21:40:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:12.156 21:40:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:12.156 21:40:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:13:12.156 21:40:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:12.156 21:40:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:12.156 21:40:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:12.156 21:40:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:13:12.156 00:13:12.156 real 0m5.014s 00:13:12.156 user 0m5.960s 00:13:12.156 sys 0m0.638s 00:13:12.156 21:40:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:13:12.156 21:40:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.156 ************************************ 00:13:12.156 END TEST raid_read_error_test 00:13:12.156 ************************************ 00:13:12.156 21:40:12 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:13:12.156 21:40:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:12.156 21:40:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:12.156 21:40:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:12.156 ************************************ 00:13:12.156 START TEST raid_write_error_test 00:13:12.156 ************************************ 00:13:12.156 21:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:13:12.156 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:12.156 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:12.156 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:12.156 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:12.156 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:12.156 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:12.156 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:12.156 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:12.156 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:12.156 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:12.156 21:40:12 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:12.156 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:12.156 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:12.156 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:12.156 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:12.156 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:12.156 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:12.156 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:12.157 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:12.157 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:12.157 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:12.157 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:12.157 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:12.157 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:12.157 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:12.157 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:12.157 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:12.157 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:12.157 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # 
bdevperf_log=/raidtest/tmp.kKwt1WnKPb 00:13:12.157 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73189 00:13:12.157 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:12.157 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73189 00:13:12.157 21:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73189 ']' 00:13:12.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.157 21:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.157 21:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:12.157 21:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.157 21:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:12.157 21:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.416 [2024-12-10 21:40:12.942736] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:13:12.416 [2024-12-10 21:40:12.942957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73189 ] 00:13:12.416 [2024-12-10 21:40:13.119692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.675 [2024-12-10 21:40:13.256858] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.935 [2024-12-10 21:40:13.478231] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:12.935 [2024-12-10 21:40:13.478400] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:13.194 21:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:13.194 21:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:13.194 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:13.194 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:13.194 21:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.194 21:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.454 BaseBdev1_malloc 00:13:13.454 21:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.454 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:13.454 21:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.454 21:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.454 true 00:13:13.454 21:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:13.454 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:13.454 21:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.454 21:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.454 [2024-12-10 21:40:14.002326] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:13.454 [2024-12-10 21:40:14.002471] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.454 [2024-12-10 21:40:14.002518] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:13.454 [2024-12-10 21:40:14.002532] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.454 [2024-12-10 21:40:14.005014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.454 [2024-12-10 21:40:14.005074] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:13.454 BaseBdev1 00:13:13.454 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.454 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:13.454 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:13.454 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.454 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.454 BaseBdev2_malloc 00:13:13.454 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.454 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:13.454 21:40:14 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.454 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.454 true 00:13:13.454 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.454 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:13.454 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.454 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.454 [2024-12-10 21:40:14.073965] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:13.454 [2024-12-10 21:40:14.074049] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.454 [2024-12-10 21:40:14.074079] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:13.454 [2024-12-10 21:40:14.074096] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.454 [2024-12-10 21:40:14.077144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.454 [2024-12-10 21:40:14.077270] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:13.454 BaseBdev2 00:13:13.454 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.454 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:13.454 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:13.454 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.454 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:13.454 BaseBdev3_malloc 00:13:13.454 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.454 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:13.454 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.454 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.454 true 00:13:13.454 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.454 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:13.454 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.454 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.454 [2024-12-10 21:40:14.157790] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:13.454 [2024-12-10 21:40:14.157856] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.454 [2024-12-10 21:40:14.157880] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:13.454 [2024-12-10 21:40:14.157893] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.454 [2024-12-10 21:40:14.160476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.454 [2024-12-10 21:40:14.160522] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:13.454 BaseBdev3 00:13:13.454 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.454 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:13.454 21:40:14 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:13.454 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.454 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.454 BaseBdev4_malloc 00:13:13.454 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.454 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:13.454 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.454 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.454 true 00:13:13.454 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.454 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:13.454 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.454 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.454 [2024-12-10 21:40:14.229102] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:13.454 [2024-12-10 21:40:14.229160] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.454 [2024-12-10 21:40:14.229181] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:13.454 [2024-12-10 21:40:14.229191] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.454 [2024-12-10 21:40:14.231487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.454 [2024-12-10 21:40:14.231528] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:13.714 BaseBdev4 
00:13:13.714 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.714 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:13.714 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.714 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.714 [2024-12-10 21:40:14.241142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:13.714 [2024-12-10 21:40:14.243070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:13.714 [2024-12-10 21:40:14.243240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:13.714 [2024-12-10 21:40:14.243309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:13.714 [2024-12-10 21:40:14.243572] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:13.714 [2024-12-10 21:40:14.243591] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:13.714 [2024-12-10 21:40:14.243897] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:13.714 [2024-12-10 21:40:14.244079] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:13.714 [2024-12-10 21:40:14.244092] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:13.714 [2024-12-10 21:40:14.244273] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:13.714 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.714 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:13:13.714 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.714 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:13.714 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:13.714 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:13.714 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:13.714 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.714 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.714 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.714 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.714 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.714 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.714 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.714 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.714 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.714 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.714 "name": "raid_bdev1", 00:13:13.714 "uuid": "60771bf3-3215-4686-b72a-55e86366d2c6", 00:13:13.714 "strip_size_kb": 64, 00:13:13.714 "state": "online", 00:13:13.714 "raid_level": "concat", 00:13:13.714 "superblock": true, 00:13:13.714 "num_base_bdevs": 4, 00:13:13.714 "num_base_bdevs_discovered": 4, 00:13:13.714 
"num_base_bdevs_operational": 4, 00:13:13.714 "base_bdevs_list": [ 00:13:13.714 { 00:13:13.714 "name": "BaseBdev1", 00:13:13.714 "uuid": "663863f1-a45d-5a02-af39-c00c2d5e419e", 00:13:13.714 "is_configured": true, 00:13:13.714 "data_offset": 2048, 00:13:13.714 "data_size": 63488 00:13:13.714 }, 00:13:13.715 { 00:13:13.715 "name": "BaseBdev2", 00:13:13.715 "uuid": "9fad4cc3-1775-54fb-b8bf-d6ada62a3b69", 00:13:13.715 "is_configured": true, 00:13:13.715 "data_offset": 2048, 00:13:13.715 "data_size": 63488 00:13:13.715 }, 00:13:13.715 { 00:13:13.715 "name": "BaseBdev3", 00:13:13.715 "uuid": "9a731f7e-0883-5aaf-8da0-62444aa8c134", 00:13:13.715 "is_configured": true, 00:13:13.715 "data_offset": 2048, 00:13:13.715 "data_size": 63488 00:13:13.715 }, 00:13:13.715 { 00:13:13.715 "name": "BaseBdev4", 00:13:13.715 "uuid": "b05151e4-0baf-53b1-9dc0-cc8834d1908d", 00:13:13.715 "is_configured": true, 00:13:13.715 "data_offset": 2048, 00:13:13.715 "data_size": 63488 00:13:13.715 } 00:13:13.715 ] 00:13:13.715 }' 00:13:13.715 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.715 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.974 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:13.974 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:14.233 [2024-12-10 21:40:14.849543] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:15.172 21:40:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:15.172 21:40:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.172 21:40:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.172 21:40:15 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.172 21:40:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:15.172 21:40:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:15.172 21:40:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:15.172 21:40:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:15.172 21:40:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:15.172 21:40:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:15.172 21:40:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:15.172 21:40:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:15.172 21:40:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:15.172 21:40:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.172 21:40:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.172 21:40:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.172 21:40:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.172 21:40:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.172 21:40:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.172 21:40:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.172 21:40:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.172 21:40:15 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.172 21:40:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.172 "name": "raid_bdev1", 00:13:15.172 "uuid": "60771bf3-3215-4686-b72a-55e86366d2c6", 00:13:15.172 "strip_size_kb": 64, 00:13:15.172 "state": "online", 00:13:15.172 "raid_level": "concat", 00:13:15.172 "superblock": true, 00:13:15.172 "num_base_bdevs": 4, 00:13:15.172 "num_base_bdevs_discovered": 4, 00:13:15.172 "num_base_bdevs_operational": 4, 00:13:15.172 "base_bdevs_list": [ 00:13:15.172 { 00:13:15.172 "name": "BaseBdev1", 00:13:15.172 "uuid": "663863f1-a45d-5a02-af39-c00c2d5e419e", 00:13:15.172 "is_configured": true, 00:13:15.172 "data_offset": 2048, 00:13:15.172 "data_size": 63488 00:13:15.172 }, 00:13:15.172 { 00:13:15.172 "name": "BaseBdev2", 00:13:15.172 "uuid": "9fad4cc3-1775-54fb-b8bf-d6ada62a3b69", 00:13:15.172 "is_configured": true, 00:13:15.172 "data_offset": 2048, 00:13:15.172 "data_size": 63488 00:13:15.172 }, 00:13:15.172 { 00:13:15.172 "name": "BaseBdev3", 00:13:15.172 "uuid": "9a731f7e-0883-5aaf-8da0-62444aa8c134", 00:13:15.172 "is_configured": true, 00:13:15.172 "data_offset": 2048, 00:13:15.172 "data_size": 63488 00:13:15.172 }, 00:13:15.172 { 00:13:15.172 "name": "BaseBdev4", 00:13:15.172 "uuid": "b05151e4-0baf-53b1-9dc0-cc8834d1908d", 00:13:15.172 "is_configured": true, 00:13:15.172 "data_offset": 2048, 00:13:15.172 "data_size": 63488 00:13:15.172 } 00:13:15.173 ] 00:13:15.173 }' 00:13:15.173 21:40:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.173 21:40:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.741 21:40:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:15.741 21:40:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.741 21:40:16 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:15.741 [2024-12-10 21:40:16.226571] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:15.741 [2024-12-10 21:40:16.226610] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:15.741 [2024-12-10 21:40:16.229749] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:15.741 [2024-12-10 21:40:16.229882] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:15.741 [2024-12-10 21:40:16.229955] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:15.741 [2024-12-10 21:40:16.229972] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:15.741 { 00:13:15.741 "results": [ 00:13:15.741 { 00:13:15.741 "job": "raid_bdev1", 00:13:15.741 "core_mask": "0x1", 00:13:15.741 "workload": "randrw", 00:13:15.741 "percentage": 50, 00:13:15.741 "status": "finished", 00:13:15.741 "queue_depth": 1, 00:13:15.741 "io_size": 131072, 00:13:15.741 "runtime": 1.37769, 00:13:15.741 "iops": 13463.115795280506, 00:13:15.741 "mibps": 1682.8894744100633, 00:13:15.741 "io_failed": 1, 00:13:15.741 "io_timeout": 0, 00:13:15.741 "avg_latency_us": 102.70692486629889, 00:13:15.741 "min_latency_us": 28.618340611353712, 00:13:15.741 "max_latency_us": 1724.2550218340612 00:13:15.741 } 00:13:15.741 ], 00:13:15.741 "core_count": 1 00:13:15.741 } 00:13:15.741 21:40:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.741 21:40:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73189 00:13:15.741 21:40:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73189 ']' 00:13:15.741 21:40:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73189 00:13:15.741 21:40:16 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:13:15.741 21:40:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:15.741 21:40:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73189 00:13:15.741 killing process with pid 73189 00:13:15.741 21:40:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:15.741 21:40:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:15.741 21:40:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73189' 00:13:15.741 21:40:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73189 00:13:15.741 [2024-12-10 21:40:16.278912] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:15.741 21:40:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73189 00:13:15.998 [2024-12-10 21:40:16.643313] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:17.377 21:40:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:17.377 21:40:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.kKwt1WnKPb 00:13:17.377 21:40:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:17.377 21:40:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:13:17.377 21:40:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:17.377 21:40:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:17.377 21:40:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:17.377 21:40:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:13:17.377 ************************************ 00:13:17.377 END TEST 
raid_write_error_test 00:13:17.377 ************************************ 00:13:17.377 00:13:17.377 real 0m5.099s 00:13:17.377 user 0m6.178s 00:13:17.377 sys 0m0.614s 00:13:17.377 21:40:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:17.377 21:40:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.377 21:40:17 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:17.377 21:40:17 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:13:17.377 21:40:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:17.377 21:40:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:17.377 21:40:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:17.377 ************************************ 00:13:17.377 START TEST raid_state_function_test 00:13:17.377 ************************************ 00:13:17.377 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:13:17.377 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:17.377 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:17.377 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:17.377 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:17.377 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:17.377 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:17.377 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:17.377 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:17.377 21:40:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:17.377 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:17.377 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:17.377 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:17.377 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:17.377 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:17.377 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:17.377 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:17.377 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:17.377 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:17.377 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:17.377 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:17.377 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:17.377 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:17.377 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:17.377 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:17.377 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:17.377 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:17.377 21:40:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:17.377 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:17.377 Process raid pid: 73333 00:13:17.378 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73333 00:13:17.378 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:17.378 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73333' 00:13:17.378 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73333 00:13:17.378 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73333 ']' 00:13:17.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.378 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.378 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:17.378 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.378 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:17.378 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.378 [2024-12-10 21:40:18.099791] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:13:17.378 [2024-12-10 21:40:18.099930] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.637 [2024-12-10 21:40:18.277089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.637 [2024-12-10 21:40:18.404427] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.895 [2024-12-10 21:40:18.614186] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:17.895 [2024-12-10 21:40:18.614224] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:18.461 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:18.461 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:18.461 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:18.461 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.461 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.461 [2024-12-10 21:40:18.951194] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:18.461 [2024-12-10 21:40:18.951257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:18.461 [2024-12-10 21:40:18.951268] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:18.461 [2024-12-10 21:40:18.951280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:18.461 [2024-12-10 21:40:18.951287] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:18.461 [2024-12-10 21:40:18.951297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:18.461 [2024-12-10 21:40:18.951304] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:18.461 [2024-12-10 21:40:18.951314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:18.461 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.461 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:18.461 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.461 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:18.461 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:18.461 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:18.461 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:18.461 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.461 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.461 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.461 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.461 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.461 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.461 21:40:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.461 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.461 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.461 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.461 "name": "Existed_Raid", 00:13:18.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.461 "strip_size_kb": 0, 00:13:18.461 "state": "configuring", 00:13:18.461 "raid_level": "raid1", 00:13:18.461 "superblock": false, 00:13:18.461 "num_base_bdevs": 4, 00:13:18.462 "num_base_bdevs_discovered": 0, 00:13:18.462 "num_base_bdevs_operational": 4, 00:13:18.462 "base_bdevs_list": [ 00:13:18.462 { 00:13:18.462 "name": "BaseBdev1", 00:13:18.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.462 "is_configured": false, 00:13:18.462 "data_offset": 0, 00:13:18.462 "data_size": 0 00:13:18.462 }, 00:13:18.462 { 00:13:18.462 "name": "BaseBdev2", 00:13:18.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.462 "is_configured": false, 00:13:18.462 "data_offset": 0, 00:13:18.462 "data_size": 0 00:13:18.462 }, 00:13:18.462 { 00:13:18.462 "name": "BaseBdev3", 00:13:18.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.462 "is_configured": false, 00:13:18.462 "data_offset": 0, 00:13:18.462 "data_size": 0 00:13:18.462 }, 00:13:18.462 { 00:13:18.462 "name": "BaseBdev4", 00:13:18.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.462 "is_configured": false, 00:13:18.462 "data_offset": 0, 00:13:18.462 "data_size": 0 00:13:18.462 } 00:13:18.462 ] 00:13:18.462 }' 00:13:18.462 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.462 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.720 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:13:18.720 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.720 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.720 [2024-12-10 21:40:19.410358] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:18.720 [2024-12-10 21:40:19.410514] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:18.720 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.720 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:18.720 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.720 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.720 [2024-12-10 21:40:19.422341] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:18.720 [2024-12-10 21:40:19.422455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:18.720 [2024-12-10 21:40:19.422489] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:18.720 [2024-12-10 21:40:19.422514] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:18.720 [2024-12-10 21:40:19.422532] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:18.720 [2024-12-10 21:40:19.422553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:18.720 [2024-12-10 21:40:19.422604] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:18.720 [2024-12-10 21:40:19.422640] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:18.720 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.720 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:18.721 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.721 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.721 [2024-12-10 21:40:19.474322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:18.721 BaseBdev1 00:13:18.721 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.721 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:18.721 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:18.721 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:18.721 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:18.721 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:18.721 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:18.721 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:18.721 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.721 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.721 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.721 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:18.721 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.721 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.721 [ 00:13:18.721 { 00:13:18.721 "name": "BaseBdev1", 00:13:18.721 "aliases": [ 00:13:18.721 "a60eaa4f-4781-409d-a2bd-6a38a7adaf87" 00:13:18.721 ], 00:13:18.721 "product_name": "Malloc disk", 00:13:18.721 "block_size": 512, 00:13:18.721 "num_blocks": 65536, 00:13:18.721 "uuid": "a60eaa4f-4781-409d-a2bd-6a38a7adaf87", 00:13:18.980 "assigned_rate_limits": { 00:13:18.980 "rw_ios_per_sec": 0, 00:13:18.980 "rw_mbytes_per_sec": 0, 00:13:18.980 "r_mbytes_per_sec": 0, 00:13:18.980 "w_mbytes_per_sec": 0 00:13:18.980 }, 00:13:18.980 "claimed": true, 00:13:18.980 "claim_type": "exclusive_write", 00:13:18.980 "zoned": false, 00:13:18.980 "supported_io_types": { 00:13:18.980 "read": true, 00:13:18.980 "write": true, 00:13:18.980 "unmap": true, 00:13:18.980 "flush": true, 00:13:18.980 "reset": true, 00:13:18.980 "nvme_admin": false, 00:13:18.980 "nvme_io": false, 00:13:18.980 "nvme_io_md": false, 00:13:18.980 "write_zeroes": true, 00:13:18.980 "zcopy": true, 00:13:18.980 "get_zone_info": false, 00:13:18.980 "zone_management": false, 00:13:18.980 "zone_append": false, 00:13:18.980 "compare": false, 00:13:18.980 "compare_and_write": false, 00:13:18.980 "abort": true, 00:13:18.980 "seek_hole": false, 00:13:18.980 "seek_data": false, 00:13:18.980 "copy": true, 00:13:18.980 "nvme_iov_md": false 00:13:18.980 }, 00:13:18.980 "memory_domains": [ 00:13:18.980 { 00:13:18.980 "dma_device_id": "system", 00:13:18.980 "dma_device_type": 1 00:13:18.980 }, 00:13:18.980 { 00:13:18.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.980 "dma_device_type": 2 00:13:18.980 } 00:13:18.980 ], 00:13:18.980 "driver_specific": {} 00:13:18.980 } 00:13:18.980 ] 00:13:18.980 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:18.980 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:18.980 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:18.980 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.980 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:18.980 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:18.980 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:18.980 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:18.980 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.980 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.980 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.980 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.980 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.980 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.980 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.980 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.980 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.980 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.980 "name": "Existed_Raid", 
00:13:18.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.980 "strip_size_kb": 0, 00:13:18.980 "state": "configuring", 00:13:18.980 "raid_level": "raid1", 00:13:18.980 "superblock": false, 00:13:18.980 "num_base_bdevs": 4, 00:13:18.980 "num_base_bdevs_discovered": 1, 00:13:18.980 "num_base_bdevs_operational": 4, 00:13:18.980 "base_bdevs_list": [ 00:13:18.980 { 00:13:18.980 "name": "BaseBdev1", 00:13:18.980 "uuid": "a60eaa4f-4781-409d-a2bd-6a38a7adaf87", 00:13:18.980 "is_configured": true, 00:13:18.980 "data_offset": 0, 00:13:18.980 "data_size": 65536 00:13:18.980 }, 00:13:18.980 { 00:13:18.980 "name": "BaseBdev2", 00:13:18.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.980 "is_configured": false, 00:13:18.980 "data_offset": 0, 00:13:18.980 "data_size": 0 00:13:18.980 }, 00:13:18.980 { 00:13:18.980 "name": "BaseBdev3", 00:13:18.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.980 "is_configured": false, 00:13:18.980 "data_offset": 0, 00:13:18.980 "data_size": 0 00:13:18.980 }, 00:13:18.980 { 00:13:18.980 "name": "BaseBdev4", 00:13:18.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.980 "is_configured": false, 00:13:18.980 "data_offset": 0, 00:13:18.980 "data_size": 0 00:13:18.980 } 00:13:18.980 ] 00:13:18.980 }' 00:13:18.980 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.980 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.241 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:19.241 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.241 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.241 [2024-12-10 21:40:19.997508] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:19.241 [2024-12-10 21:40:19.997644] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:13:19.241 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:19.241 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:13:19.241 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:19.241 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:19.241 [2024-12-10 21:40:20.009574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:19.241 [2024-12-10 21:40:20.011691] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:19.241 [2024-12-10 21:40:20.011756] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:19.241 [2024-12-10 21:40:20.011768] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:13:19.241 [2024-12-10 21:40:20.011781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:13:19.241 [2024-12-10 21:40:20.011789] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:13:19.241 [2024-12-10 21:40:20.011799] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:13:19.241 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:19.241 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:13:19.241 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:13:19.241 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:13:19.241 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:19.241 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:19.241 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:19.241 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:19.241 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:19.241 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:19.241 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:19.241 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:19.241 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:19.241 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:19.507 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:19.507 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:19.507 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:19.507 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:19.507 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:19.507 "name": "Existed_Raid",
00:13:19.507 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:19.507 "strip_size_kb": 0,
00:13:19.507 "state": "configuring",
00:13:19.507 "raid_level": "raid1",
00:13:19.507 "superblock": false,
00:13:19.507 "num_base_bdevs": 4,
00:13:19.507 "num_base_bdevs_discovered": 1,
00:13:19.507 "num_base_bdevs_operational": 4,
00:13:19.507 "base_bdevs_list": [
00:13:19.507 {
00:13:19.507 "name": "BaseBdev1",
00:13:19.507 "uuid": "a60eaa4f-4781-409d-a2bd-6a38a7adaf87",
00:13:19.507 "is_configured": true,
00:13:19.507 "data_offset": 0,
00:13:19.507 "data_size": 65536
00:13:19.507 },
00:13:19.507 {
00:13:19.507 "name": "BaseBdev2",
00:13:19.507 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:19.507 "is_configured": false,
00:13:19.507 "data_offset": 0,
00:13:19.508 "data_size": 0
00:13:19.508 },
00:13:19.508 {
00:13:19.508 "name": "BaseBdev3",
00:13:19.508 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:19.508 "is_configured": false,
00:13:19.508 "data_offset": 0,
00:13:19.508 "data_size": 0
00:13:19.508 },
00:13:19.508 {
00:13:19.508 "name": "BaseBdev4",
00:13:19.508 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:19.508 "is_configured": false,
00:13:19.508 "data_offset": 0,
00:13:19.508 "data_size": 0
00:13:19.508 }
00:13:19.508 ]
00:13:19.508 }'
00:13:19.508 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:19.508 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:19.767 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:13:19.767 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:19.767 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:19.767 [2024-12-10 21:40:20.471347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:13:19.767 BaseBdev2
00:13:19.767 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:19.767 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:13:19.767 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:13:19.767 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:13:19.767 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:13:19.767 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:13:19.767 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:13:19.767 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:13:19.767 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:19.767 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:19.767 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:19.767 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:13:19.767 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:19.767 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:19.767 [
00:13:19.767 {
00:13:19.767 "name": "BaseBdev2",
00:13:19.767 "aliases": [
00:13:19.767 "701282f1-92e9-4c5d-aa60-a21db7f9536e"
00:13:19.767 ],
00:13:19.767 "product_name": "Malloc disk",
00:13:19.767 "block_size": 512,
00:13:19.767 "num_blocks": 65536,
00:13:19.767 "uuid": "701282f1-92e9-4c5d-aa60-a21db7f9536e",
00:13:19.767 "assigned_rate_limits": {
00:13:19.767 "rw_ios_per_sec": 0,
00:13:19.767 "rw_mbytes_per_sec": 0,
00:13:19.767 "r_mbytes_per_sec": 0,
00:13:19.767 "w_mbytes_per_sec": 0
00:13:19.767 },
00:13:19.767 "claimed": true,
00:13:19.767 "claim_type": "exclusive_write",
00:13:19.767 "zoned": false,
00:13:19.767 "supported_io_types": {
00:13:19.767 "read": true,
00:13:19.767 "write": true,
00:13:19.767 "unmap": true,
00:13:19.767 "flush": true,
00:13:19.767 "reset": true,
00:13:19.767 "nvme_admin": false,
00:13:19.767 "nvme_io": false,
00:13:19.767 "nvme_io_md": false,
00:13:19.768 "write_zeroes": true,
00:13:19.768 "zcopy": true,
00:13:19.768 "get_zone_info": false,
00:13:19.768 "zone_management": false,
00:13:19.768 "zone_append": false,
00:13:19.768 "compare": false,
00:13:19.768 "compare_and_write": false,
00:13:19.768 "abort": true,
00:13:19.768 "seek_hole": false,
00:13:19.768 "seek_data": false,
00:13:19.768 "copy": true,
00:13:19.768 "nvme_iov_md": false
00:13:19.768 },
00:13:19.768 "memory_domains": [
00:13:19.768 {
00:13:19.768 "dma_device_id": "system",
00:13:19.768 "dma_device_type": 1
00:13:19.768 },
00:13:19.768 {
00:13:19.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:19.768 "dma_device_type": 2
00:13:19.768 }
00:13:19.768 ],
00:13:19.768 "driver_specific": {}
00:13:19.768 }
00:13:19.768 ]
00:13:19.768 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:19.768 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:13:19.768 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:13:19.768 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:13:19.768 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:13:19.768 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:19.768 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:19.768 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:19.768 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:19.768 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:19.768 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:19.768 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:19.768 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:19.768 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:19.768 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:19.768 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:19.768 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:19.768 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:19.768 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:20.076 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:20.076 "name": "Existed_Raid",
00:13:20.076 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:20.076 "strip_size_kb": 0,
00:13:20.076 "state": "configuring",
00:13:20.076 "raid_level": "raid1",
00:13:20.076 "superblock": false,
00:13:20.076 "num_base_bdevs": 4,
00:13:20.076 "num_base_bdevs_discovered": 2,
00:13:20.076 "num_base_bdevs_operational": 4,
00:13:20.076 "base_bdevs_list": [
00:13:20.076 {
00:13:20.076 "name": "BaseBdev1",
00:13:20.076 "uuid": "a60eaa4f-4781-409d-a2bd-6a38a7adaf87",
00:13:20.076 "is_configured": true,
00:13:20.076 "data_offset": 0,
00:13:20.076 "data_size": 65536
00:13:20.076 },
00:13:20.076 {
00:13:20.076 "name": "BaseBdev2",
00:13:20.076 "uuid": "701282f1-92e9-4c5d-aa60-a21db7f9536e",
00:13:20.076 "is_configured": true,
00:13:20.076 "data_offset": 0,
00:13:20.076 "data_size": 65536
00:13:20.076 },
00:13:20.076 {
00:13:20.076 "name": "BaseBdev3",
00:13:20.076 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:20.076 "is_configured": false,
00:13:20.076 "data_offset": 0,
00:13:20.076 "data_size": 0
00:13:20.076 },
00:13:20.076 {
00:13:20.077 "name": "BaseBdev4",
00:13:20.077 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:20.077 "is_configured": false,
00:13:20.077 "data_offset": 0,
00:13:20.077 "data_size": 0
00:13:20.077 }
00:13:20.077 ]
00:13:20.077 }'
00:13:20.077 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:20.077 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:20.334 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:13:20.334 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:20.334 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:20.334 [2024-12-10 21:40:21.062803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:13:20.334 BaseBdev3
00:13:20.334 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:20.334 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:13:20.334 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:13:20.334 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:13:20.334 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:13:20.334 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:13:20.334 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:13:20.334 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:13:20.334 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:20.334 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:20.334 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:20.334 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:13:20.334 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:20.334 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:20.334 [
00:13:20.334 {
00:13:20.334 "name": "BaseBdev3",
00:13:20.334 "aliases": [
00:13:20.334 "8643d8cd-3ff6-4e45-b481-e2aa586b2c76"
00:13:20.334 ],
00:13:20.334 "product_name": "Malloc disk",
00:13:20.334 "block_size": 512,
00:13:20.334 "num_blocks": 65536,
00:13:20.334 "uuid": "8643d8cd-3ff6-4e45-b481-e2aa586b2c76",
00:13:20.334 "assigned_rate_limits": {
00:13:20.334 "rw_ios_per_sec": 0,
00:13:20.334 "rw_mbytes_per_sec": 0,
00:13:20.334 "r_mbytes_per_sec": 0,
00:13:20.334 "w_mbytes_per_sec": 0
00:13:20.335 },
00:13:20.335 "claimed": true,
00:13:20.335 "claim_type": "exclusive_write",
00:13:20.335 "zoned": false,
00:13:20.335 "supported_io_types": {
00:13:20.335 "read": true,
00:13:20.335 "write": true,
00:13:20.335 "unmap": true,
00:13:20.335 "flush": true,
00:13:20.335 "reset": true,
00:13:20.335 "nvme_admin": false,
00:13:20.335 "nvme_io": false,
00:13:20.335 "nvme_io_md": false,
00:13:20.335 "write_zeroes": true,
00:13:20.335 "zcopy": true,
00:13:20.335 "get_zone_info": false,
00:13:20.335 "zone_management": false,
00:13:20.335 "zone_append": false,
00:13:20.335 "compare": false,
00:13:20.335 "compare_and_write": false,
00:13:20.335 "abort": true,
00:13:20.335 "seek_hole": false,
00:13:20.335 "seek_data": false,
00:13:20.335 "copy": true,
00:13:20.335 "nvme_iov_md": false
00:13:20.335 },
00:13:20.335 "memory_domains": [
00:13:20.335 {
00:13:20.335 "dma_device_id": "system",
00:13:20.335 "dma_device_type": 1
00:13:20.335 },
00:13:20.335 {
00:13:20.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:20.335 "dma_device_type": 2
00:13:20.335 }
00:13:20.335 ],
00:13:20.335 "driver_specific": {}
00:13:20.335 }
00:13:20.335 ]
00:13:20.335 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:20.335 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:13:20.335 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:13:20.335 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:13:20.335 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:13:20.335 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:20.335 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:20.335 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:20.335 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:20.335 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:20.335 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:20.335 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:20.335 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:20.335 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:20.335 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:20.335 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:20.335 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:20.335 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:20.593 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:20.593 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:20.593 "name": "Existed_Raid",
00:13:20.593 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:20.593 "strip_size_kb": 0,
00:13:20.593 "state": "configuring",
00:13:20.593 "raid_level": "raid1",
00:13:20.593 "superblock": false,
00:13:20.593 "num_base_bdevs": 4,
00:13:20.593 "num_base_bdevs_discovered": 3,
00:13:20.593 "num_base_bdevs_operational": 4,
00:13:20.593 "base_bdevs_list": [
00:13:20.593 {
00:13:20.593 "name": "BaseBdev1",
00:13:20.593 "uuid": "a60eaa4f-4781-409d-a2bd-6a38a7adaf87",
00:13:20.593 "is_configured": true,
00:13:20.593 "data_offset": 0,
00:13:20.593 "data_size": 65536
00:13:20.593 },
00:13:20.593 {
00:13:20.593 "name": "BaseBdev2",
00:13:20.593 "uuid": "701282f1-92e9-4c5d-aa60-a21db7f9536e",
00:13:20.593 "is_configured": true,
00:13:20.593 "data_offset": 0,
00:13:20.593 "data_size": 65536
00:13:20.593 },
00:13:20.593 {
00:13:20.593 "name": "BaseBdev3",
00:13:20.593 "uuid": "8643d8cd-3ff6-4e45-b481-e2aa586b2c76",
00:13:20.593 "is_configured": true,
00:13:20.593 "data_offset": 0,
00:13:20.593 "data_size": 65536
00:13:20.593 },
00:13:20.593 {
00:13:20.593 "name": "BaseBdev4",
00:13:20.593 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:20.593 "is_configured": false,
00:13:20.593 "data_offset": 0,
00:13:20.593 "data_size": 0
00:13:20.593 }
00:13:20.593 ]
00:13:20.593 }'
00:13:20.593 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:20.593 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:20.852 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:13:20.852 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:20.852 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:20.852 [2024-12-10 21:40:21.605774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:13:20.852 [2024-12-10 21:40:21.605902] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:13:20.852 [2024-12-10 21:40:21.605931] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:13:20.852 [2024-12-10 21:40:21.606289] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:13:20.852 [2024-12-10 21:40:21.606558] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:13:20.852 [2024-12-10 21:40:21.606617] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:13:20.852 [2024-12-10 21:40:21.606979] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:20.852 BaseBdev4
00:13:20.852 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:20.852 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4
00:13:20.852 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4
00:13:20.852 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:13:20.852 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:13:20.852 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:13:20.852 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:13:20.852 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:13:20.852 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:20.852 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:20.852 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:20.852 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:13:20.852 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:20.852 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:20.852 [
00:13:20.852 {
00:13:20.852 "name": "BaseBdev4",
00:13:20.852 "aliases": [
00:13:20.852 "a8813a8f-4ef7-40c4-ae12-0be28a89395f"
00:13:20.852 ],
00:13:20.852 "product_name": "Malloc disk",
00:13:20.852 "block_size": 512,
00:13:20.852 "num_blocks": 65536,
00:13:20.852 "uuid": "a8813a8f-4ef7-40c4-ae12-0be28a89395f",
00:13:20.852 "assigned_rate_limits": {
00:13:20.852 "rw_ios_per_sec": 0,
00:13:20.852 "rw_mbytes_per_sec": 0,
00:13:20.852 "r_mbytes_per_sec": 0,
00:13:20.852 "w_mbytes_per_sec": 0
00:13:20.852 },
00:13:20.852 "claimed": true,
00:13:20.852 "claim_type": "exclusive_write",
00:13:20.852 "zoned": false,
00:13:20.852 "supported_io_types": {
00:13:20.852 "read": true,
00:13:20.852 "write": true,
00:13:20.852 "unmap": true,
00:13:20.852 "flush": true,
00:13:20.852 "reset": true,
00:13:20.852 "nvme_admin": false,
00:13:20.852 "nvme_io": false,
00:13:20.852 "nvme_io_md": false,
00:13:20.852 "write_zeroes": true,
00:13:20.852 "zcopy": true,
00:13:21.111 "get_zone_info": false,
00:13:21.111 "zone_management": false,
00:13:21.111 "zone_append": false,
00:13:21.111 "compare": false,
00:13:21.111 "compare_and_write": false,
00:13:21.111 "abort": true,
00:13:21.111 "seek_hole": false,
00:13:21.111 "seek_data": false,
00:13:21.111 "copy": true,
00:13:21.111 "nvme_iov_md": false
00:13:21.111 },
00:13:21.111 "memory_domains": [
00:13:21.111 {
00:13:21.111 "dma_device_id": "system",
00:13:21.111 "dma_device_type": 1
00:13:21.111 },
00:13:21.111 {
00:13:21.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:21.111 "dma_device_type": 2
00:13:21.111 }
00:13:21.111 ],
00:13:21.111 "driver_specific": {}
00:13:21.111 }
00:13:21.111 ]
00:13:21.111 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:21.111 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:13:21.111 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:13:21.111 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:13:21.111 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4
00:13:21.111 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:21.111 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:21.111 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:21.111 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:21.111 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:21.111 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:21.111 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:21.111 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:21.111 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:21.111 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:21.111 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:21.111 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:21.111 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:21.111 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:21.111 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:21.111 "name": "Existed_Raid",
00:13:21.111 "uuid": "0ed9cde1-d63b-476f-9eca-5fa7838059bf",
00:13:21.111 "strip_size_kb": 0,
00:13:21.111 "state": "online",
00:13:21.111 "raid_level": "raid1",
00:13:21.111 "superblock": false,
00:13:21.111 "num_base_bdevs": 4,
00:13:21.111 "num_base_bdevs_discovered": 4,
00:13:21.111 "num_base_bdevs_operational": 4,
00:13:21.111 "base_bdevs_list": [
00:13:21.111 {
00:13:21.111 "name": "BaseBdev1",
00:13:21.111 "uuid": "a60eaa4f-4781-409d-a2bd-6a38a7adaf87",
00:13:21.111 "is_configured": true,
00:13:21.111 "data_offset": 0,
00:13:21.111 "data_size": 65536
00:13:21.111 },
00:13:21.111 {
00:13:21.111 "name": "BaseBdev2",
00:13:21.111 "uuid": "701282f1-92e9-4c5d-aa60-a21db7f9536e",
00:13:21.111 "is_configured": true,
00:13:21.111 "data_offset": 0,
00:13:21.111 "data_size": 65536
00:13:21.111 },
00:13:21.111 {
00:13:21.111 "name": "BaseBdev3",
00:13:21.111 "uuid": "8643d8cd-3ff6-4e45-b481-e2aa586b2c76",
00:13:21.111 "is_configured": true,
00:13:21.111 "data_offset": 0,
00:13:21.111 "data_size": 65536
00:13:21.111 },
00:13:21.111 {
00:13:21.111 "name": "BaseBdev4",
00:13:21.111 "uuid": "a8813a8f-4ef7-40c4-ae12-0be28a89395f",
00:13:21.111 "is_configured": true,
00:13:21.111 "data_offset": 0,
00:13:21.111 "data_size": 65536
00:13:21.111 }
00:13:21.111 ]
00:13:21.111 }'
00:13:21.111 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:21.111 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:21.370 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:13:21.370 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:13:21.370 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:13:21.370 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:13:21.370 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:13:21.370 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:13:21.629 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:13:21.629 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:13:21.629 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:21.629 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:21.629 [2024-12-10 21:40:22.161279] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:21.629 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:21.629 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:13:21.629 "name": "Existed_Raid",
00:13:21.629 "aliases": [
00:13:21.629 "0ed9cde1-d63b-476f-9eca-5fa7838059bf"
00:13:21.629 ],
00:13:21.629 "product_name": "Raid Volume",
00:13:21.629 "block_size": 512,
00:13:21.629 "num_blocks": 65536,
00:13:21.629 "uuid": "0ed9cde1-d63b-476f-9eca-5fa7838059bf",
00:13:21.629 "assigned_rate_limits": {
00:13:21.629 "rw_ios_per_sec": 0,
00:13:21.629 "rw_mbytes_per_sec": 0,
00:13:21.629 "r_mbytes_per_sec": 0,
00:13:21.629 "w_mbytes_per_sec": 0
00:13:21.629 },
00:13:21.629 "claimed": false,
00:13:21.629 "zoned": false,
00:13:21.629 "supported_io_types": {
00:13:21.629 "read": true,
00:13:21.629 "write": true,
00:13:21.629 "unmap": false,
00:13:21.629 "flush": false,
00:13:21.629 "reset": true,
00:13:21.629 "nvme_admin": false,
00:13:21.629 "nvme_io": false,
00:13:21.629 "nvme_io_md": false,
00:13:21.629 "write_zeroes": true,
00:13:21.629 "zcopy": false,
00:13:21.629 "get_zone_info": false,
00:13:21.629 "zone_management": false,
00:13:21.629 "zone_append": false,
00:13:21.629 "compare": false,
00:13:21.629 "compare_and_write": false,
00:13:21.629 "abort": false,
00:13:21.629 "seek_hole": false,
00:13:21.629 "seek_data": false,
00:13:21.629 "copy": false,
00:13:21.629 "nvme_iov_md": false
00:13:21.629 },
00:13:21.629 "memory_domains": [
00:13:21.629 {
00:13:21.629 "dma_device_id": "system",
00:13:21.629 "dma_device_type": 1
00:13:21.629 },
00:13:21.629 {
00:13:21.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:21.629 "dma_device_type": 2
00:13:21.629 },
00:13:21.629 {
00:13:21.629 "dma_device_id": "system",
00:13:21.629 "dma_device_type": 1
00:13:21.629 },
00:13:21.629 {
00:13:21.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:21.629 "dma_device_type": 2
00:13:21.629 },
00:13:21.629 {
00:13:21.629 "dma_device_id": "system",
00:13:21.629 "dma_device_type": 1
00:13:21.629 },
00:13:21.629 {
00:13:21.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:21.629 "dma_device_type": 2 00:13:21.629 }, 00:13:21.629 { 00:13:21.629 "dma_device_id": "system", 00:13:21.629 "dma_device_type": 1 00:13:21.629 }, 00:13:21.629 { 00:13:21.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.629 "dma_device_type": 2 00:13:21.629 } 00:13:21.629 ], 00:13:21.629 "driver_specific": { 00:13:21.629 "raid": { 00:13:21.629 "uuid": "0ed9cde1-d63b-476f-9eca-5fa7838059bf", 00:13:21.629 "strip_size_kb": 0, 00:13:21.629 "state": "online", 00:13:21.629 "raid_level": "raid1", 00:13:21.629 "superblock": false, 00:13:21.629 "num_base_bdevs": 4, 00:13:21.629 "num_base_bdevs_discovered": 4, 00:13:21.629 "num_base_bdevs_operational": 4, 00:13:21.629 "base_bdevs_list": [ 00:13:21.629 { 00:13:21.629 "name": "BaseBdev1", 00:13:21.629 "uuid": "a60eaa4f-4781-409d-a2bd-6a38a7adaf87", 00:13:21.629 "is_configured": true, 00:13:21.629 "data_offset": 0, 00:13:21.629 "data_size": 65536 00:13:21.629 }, 00:13:21.629 { 00:13:21.629 "name": "BaseBdev2", 00:13:21.629 "uuid": "701282f1-92e9-4c5d-aa60-a21db7f9536e", 00:13:21.629 "is_configured": true, 00:13:21.629 "data_offset": 0, 00:13:21.629 "data_size": 65536 00:13:21.629 }, 00:13:21.629 { 00:13:21.629 "name": "BaseBdev3", 00:13:21.629 "uuid": "8643d8cd-3ff6-4e45-b481-e2aa586b2c76", 00:13:21.629 "is_configured": true, 00:13:21.629 "data_offset": 0, 00:13:21.629 "data_size": 65536 00:13:21.629 }, 00:13:21.629 { 00:13:21.629 "name": "BaseBdev4", 00:13:21.629 "uuid": "a8813a8f-4ef7-40c4-ae12-0be28a89395f", 00:13:21.629 "is_configured": true, 00:13:21.630 "data_offset": 0, 00:13:21.630 "data_size": 65536 00:13:21.630 } 00:13:21.630 ] 00:13:21.630 } 00:13:21.630 } 00:13:21.630 }' 00:13:21.630 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:21.630 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:21.630 BaseBdev2 00:13:21.630 BaseBdev3 
00:13:21.630 BaseBdev4' 00:13:21.630 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:21.630 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:21.630 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:21.630 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:21.630 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.630 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.630 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:21.630 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.630 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:21.630 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:21.630 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:21.630 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:21.630 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.630 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:21.630 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.630 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.630 21:40:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:21.630 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:21.630 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:21.630 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:21.630 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.630 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.630 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:21.630 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.890 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:21.890 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:21.890 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:21.890 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:21.890 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:21.890 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.890 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.890 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.890 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:21.890 21:40:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:21.890 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:21.890 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.890 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.890 [2024-12-10 21:40:22.500455] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:21.890 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.890 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:21.890 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:21.890 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:21.890 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:21.890 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:21.890 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:21.890 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.890 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:21.890 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:21.890 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:21.890 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:21.890 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.890 
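The trace above (bdev_raid.sh@188-193) builds a `block_size md_size md_interleave dif_type` tuple for the raid bdev, then loops over every configured base bdev and checks each one reports the same tuple. A minimal standalone sketch of that check follows; the `raid_json` and `base_json` fixtures are hand-written stand-ins for live `rpc_cmd` output, not real SPDK data, and a working `jq` is assumed:

```shell
#!/usr/bin/env bash
# Sketch of the bdev_raid.sh@188-193 geometry check: the raid bdev and every
# configured base bdev must report the same
# block_size/md_size/md_interleave/dif_type tuple. The JSON here is a
# hypothetical stand-in for rpc_cmd bdev_get_bdevs output.
set -euo pipefail

raid_json='{"block_size":512,"md_size":null,"md_interleave":null,"dif_type":null,
  "driver_specific":{"raid":{"base_bdevs_list":[
    {"name":"BaseBdev1","is_configured":true},
    {"name":"BaseBdev2","is_configured":true}]}}}'
base_json='[{"block_size":512,"md_size":null,"md_interleave":null,"dif_type":null}]'

# Tuple for the raid bdev itself; jq's join() renders nulls as empty strings,
# which is why the trace shows cmp_raid_bdev='512 ' with trailing blanks.
cmp_raid_bdev=$(jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' <<<"$raid_json")

# Names of all configured base bdevs, as in bdev_raid.sh@188.
base_bdev_names=$(jq -r '.driver_specific.raid.base_bdevs_list[]
  | select(.is_configured == true).name' <<<"$raid_json")

for name in $base_bdev_names; do
  # In the real test this queries the target: rpc_cmd bdev_get_bdevs -b "$name"
  cmp_base_bdev=$(jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' <<<"$base_json")
  [[ "$cmp_base_bdev" == "$cmp_raid_bdev" ]] || { echo "geometry mismatch on $name"; exit 1; }
done
echo "geometry ok"
```

The trailing-space-sensitive comparison (`[[ 512 == \5\1\2\ \ \ ]]` in the trace) works because unset metadata fields join as empty strings, so a base bdev with any metadata or DIF configured would produce a different tuple and fail the check.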
21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.890 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.890 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.890 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.890 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.890 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.890 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.890 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.890 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.890 "name": "Existed_Raid", 00:13:21.890 "uuid": "0ed9cde1-d63b-476f-9eca-5fa7838059bf", 00:13:21.890 "strip_size_kb": 0, 00:13:21.890 "state": "online", 00:13:21.890 "raid_level": "raid1", 00:13:21.890 "superblock": false, 00:13:21.890 "num_base_bdevs": 4, 00:13:21.890 "num_base_bdevs_discovered": 3, 00:13:21.890 "num_base_bdevs_operational": 3, 00:13:21.890 "base_bdevs_list": [ 00:13:21.890 { 00:13:21.890 "name": null, 00:13:21.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.890 "is_configured": false, 00:13:21.890 "data_offset": 0, 00:13:21.890 "data_size": 65536 00:13:21.890 }, 00:13:21.890 { 00:13:21.890 "name": "BaseBdev2", 00:13:21.890 "uuid": "701282f1-92e9-4c5d-aa60-a21db7f9536e", 00:13:21.890 "is_configured": true, 00:13:21.890 "data_offset": 0, 00:13:21.890 "data_size": 65536 00:13:21.890 }, 00:13:21.890 { 00:13:21.890 "name": "BaseBdev3", 00:13:21.890 "uuid": "8643d8cd-3ff6-4e45-b481-e2aa586b2c76", 00:13:21.890 "is_configured": true, 00:13:21.890 "data_offset": 0, 
00:13:21.890 "data_size": 65536 00:13:21.890 }, 00:13:21.890 { 00:13:21.890 "name": "BaseBdev4", 00:13:21.890 "uuid": "a8813a8f-4ef7-40c4-ae12-0be28a89395f", 00:13:21.890 "is_configured": true, 00:13:21.890 "data_offset": 0, 00:13:21.890 "data_size": 65536 00:13:21.890 } 00:13:21.890 ] 00:13:21.890 }' 00:13:21.890 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.890 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.457 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:22.457 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:22.457 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.457 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.457 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:22.457 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.457 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.457 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:22.457 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:22.457 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:22.457 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.457 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.457 [2024-12-10 21:40:23.098915] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:22.457 21:40:23 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.457 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:22.457 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:22.457 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.457 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:22.457 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.457 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.458 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.717 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:22.717 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:22.717 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:22.717 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.717 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.717 [2024-12-10 21:40:23.258092] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:22.717 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.717 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:22.717 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:22.717 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:22.717 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:13:22.717 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.717 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.717 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.717 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:22.717 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:22.717 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:22.717 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.717 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.717 [2024-12-10 21:40:23.416059] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:22.717 [2024-12-10 21:40:23.416223] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:22.977 [2024-12-10 21:40:23.522128] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:22.977 [2024-12-10 21:40:23.522287] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:22.977 [2024-12-10 21:40:23.522331] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:22.977 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.977 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:22.977 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:22.977 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:13:22.977 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:22.977 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.977 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.977 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.977 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:22.977 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:22.977 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:22.977 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:22.977 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:22.977 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:22.977 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.977 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.977 BaseBdev2 00:13:22.977 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.977 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:22.977 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:22.977 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:22.977 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:22.977 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 
-- # [[ -z '' ]] 00:13:22.977 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:22.977 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:22.977 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.977 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.977 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.977 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:22.977 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.977 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.977 [ 00:13:22.977 { 00:13:22.977 "name": "BaseBdev2", 00:13:22.977 "aliases": [ 00:13:22.977 "cd467af4-c088-49cc-a499-480c8d739314" 00:13:22.977 ], 00:13:22.977 "product_name": "Malloc disk", 00:13:22.977 "block_size": 512, 00:13:22.977 "num_blocks": 65536, 00:13:22.977 "uuid": "cd467af4-c088-49cc-a499-480c8d739314", 00:13:22.977 "assigned_rate_limits": { 00:13:22.977 "rw_ios_per_sec": 0, 00:13:22.977 "rw_mbytes_per_sec": 0, 00:13:22.977 "r_mbytes_per_sec": 0, 00:13:22.977 "w_mbytes_per_sec": 0 00:13:22.977 }, 00:13:22.977 "claimed": false, 00:13:22.977 "zoned": false, 00:13:22.977 "supported_io_types": { 00:13:22.977 "read": true, 00:13:22.977 "write": true, 00:13:22.977 "unmap": true, 00:13:22.977 "flush": true, 00:13:22.977 "reset": true, 00:13:22.977 "nvme_admin": false, 00:13:22.977 "nvme_io": false, 00:13:22.977 "nvme_io_md": false, 00:13:22.977 "write_zeroes": true, 00:13:22.977 "zcopy": true, 00:13:22.977 "get_zone_info": false, 00:13:22.977 "zone_management": false, 00:13:22.977 "zone_append": false, 00:13:22.977 "compare": false, 
00:13:22.977 "compare_and_write": false, 00:13:22.977 "abort": true, 00:13:22.977 "seek_hole": false, 00:13:22.977 "seek_data": false, 00:13:22.977 "copy": true, 00:13:22.977 "nvme_iov_md": false 00:13:22.978 }, 00:13:22.978 "memory_domains": [ 00:13:22.978 { 00:13:22.978 "dma_device_id": "system", 00:13:22.978 "dma_device_type": 1 00:13:22.978 }, 00:13:22.978 { 00:13:22.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.978 "dma_device_type": 2 00:13:22.978 } 00:13:22.978 ], 00:13:22.978 "driver_specific": {} 00:13:22.978 } 00:13:22.978 ] 00:13:22.978 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.978 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:22.978 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:22.978 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:22.978 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:22.978 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.978 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.978 BaseBdev3 00:13:22.978 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.978 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:22.978 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:22.978 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:22.978 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:22.978 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' 
]] 00:13:22.978 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:22.978 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:22.978 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.978 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.978 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.978 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:22.978 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.978 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.978 [ 00:13:22.978 { 00:13:22.978 "name": "BaseBdev3", 00:13:22.978 "aliases": [ 00:13:22.978 "e3e36c17-0c27-46e9-91e7-aab1b96fb995" 00:13:22.978 ], 00:13:22.978 "product_name": "Malloc disk", 00:13:22.978 "block_size": 512, 00:13:22.978 "num_blocks": 65536, 00:13:22.978 "uuid": "e3e36c17-0c27-46e9-91e7-aab1b96fb995", 00:13:22.978 "assigned_rate_limits": { 00:13:22.978 "rw_ios_per_sec": 0, 00:13:22.978 "rw_mbytes_per_sec": 0, 00:13:22.978 "r_mbytes_per_sec": 0, 00:13:22.978 "w_mbytes_per_sec": 0 00:13:22.978 }, 00:13:22.978 "claimed": false, 00:13:22.978 "zoned": false, 00:13:22.978 "supported_io_types": { 00:13:22.978 "read": true, 00:13:22.978 "write": true, 00:13:22.978 "unmap": true, 00:13:22.978 "flush": true, 00:13:22.978 "reset": true, 00:13:22.978 "nvme_admin": false, 00:13:22.978 "nvme_io": false, 00:13:22.978 "nvme_io_md": false, 00:13:22.978 "write_zeroes": true, 00:13:22.978 "zcopy": true, 00:13:22.978 "get_zone_info": false, 00:13:22.978 "zone_management": false, 00:13:22.978 "zone_append": false, 00:13:22.978 "compare": false, 00:13:22.978 
"compare_and_write": false, 00:13:22.978 "abort": true, 00:13:22.978 "seek_hole": false, 00:13:22.978 "seek_data": false, 00:13:22.978 "copy": true, 00:13:22.978 "nvme_iov_md": false 00:13:22.978 }, 00:13:22.978 "memory_domains": [ 00:13:22.978 { 00:13:22.978 "dma_device_id": "system", 00:13:22.978 "dma_device_type": 1 00:13:22.978 }, 00:13:22.978 { 00:13:22.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.978 "dma_device_type": 2 00:13:22.978 } 00:13:22.978 ], 00:13:22.978 "driver_specific": {} 00:13:22.978 } 00:13:22.978 ] 00:13:22.978 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.978 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:22.978 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:22.978 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:22.978 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:22.978 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.978 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.238 BaseBdev4 00:13:23.238 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.238 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:23.238 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:23.238 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:23.238 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:23.238 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 
00:13:23.238 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:23.238 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:23.238 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.238 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.238 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.238 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:23.238 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.238 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.238 [ 00:13:23.238 { 00:13:23.238 "name": "BaseBdev4", 00:13:23.238 "aliases": [ 00:13:23.238 "829fa262-f0ad-44d5-8b3a-05a61e2c3fb4" 00:13:23.238 ], 00:13:23.238 "product_name": "Malloc disk", 00:13:23.238 "block_size": 512, 00:13:23.238 "num_blocks": 65536, 00:13:23.238 "uuid": "829fa262-f0ad-44d5-8b3a-05a61e2c3fb4", 00:13:23.238 "assigned_rate_limits": { 00:13:23.238 "rw_ios_per_sec": 0, 00:13:23.238 "rw_mbytes_per_sec": 0, 00:13:23.238 "r_mbytes_per_sec": 0, 00:13:23.238 "w_mbytes_per_sec": 0 00:13:23.238 }, 00:13:23.238 "claimed": false, 00:13:23.238 "zoned": false, 00:13:23.238 "supported_io_types": { 00:13:23.238 "read": true, 00:13:23.238 "write": true, 00:13:23.238 "unmap": true, 00:13:23.238 "flush": true, 00:13:23.238 "reset": true, 00:13:23.238 "nvme_admin": false, 00:13:23.238 "nvme_io": false, 00:13:23.238 "nvme_io_md": false, 00:13:23.238 "write_zeroes": true, 00:13:23.238 "zcopy": true, 00:13:23.238 "get_zone_info": false, 00:13:23.238 "zone_management": false, 00:13:23.238 "zone_append": false, 00:13:23.238 "compare": false, 00:13:23.238 
"compare_and_write": false, 00:13:23.238 "abort": true, 00:13:23.238 "seek_hole": false, 00:13:23.238 "seek_data": false, 00:13:23.238 "copy": true, 00:13:23.238 "nvme_iov_md": false 00:13:23.238 }, 00:13:23.238 "memory_domains": [ 00:13:23.238 { 00:13:23.238 "dma_device_id": "system", 00:13:23.238 "dma_device_type": 1 00:13:23.238 }, 00:13:23.238 { 00:13:23.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.238 "dma_device_type": 2 00:13:23.238 } 00:13:23.238 ], 00:13:23.238 "driver_specific": {} 00:13:23.238 } 00:13:23.238 ] 00:13:23.238 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.238 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:23.238 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:23.238 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:23.238 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:23.238 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.238 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.238 [2024-12-10 21:40:23.814127] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:23.239 [2024-12-10 21:40:23.814181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:23.239 [2024-12-10 21:40:23.814208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:23.239 [2024-12-10 21:40:23.816334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:23.239 [2024-12-10 21:40:23.816395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:13:23.239 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.239 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:23.239 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.239 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:23.239 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.239 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.239 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:23.239 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.239 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.239 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.239 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.239 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.239 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.239 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.239 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.239 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.239 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.239 "name": "Existed_Raid", 00:13:23.239 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:23.239 "strip_size_kb": 0, 00:13:23.239 "state": "configuring", 00:13:23.239 "raid_level": "raid1", 00:13:23.239 "superblock": false, 00:13:23.239 "num_base_bdevs": 4, 00:13:23.239 "num_base_bdevs_discovered": 3, 00:13:23.239 "num_base_bdevs_operational": 4, 00:13:23.239 "base_bdevs_list": [ 00:13:23.239 { 00:13:23.239 "name": "BaseBdev1", 00:13:23.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.239 "is_configured": false, 00:13:23.239 "data_offset": 0, 00:13:23.239 "data_size": 0 00:13:23.239 }, 00:13:23.239 { 00:13:23.239 "name": "BaseBdev2", 00:13:23.239 "uuid": "cd467af4-c088-49cc-a499-480c8d739314", 00:13:23.239 "is_configured": true, 00:13:23.239 "data_offset": 0, 00:13:23.239 "data_size": 65536 00:13:23.239 }, 00:13:23.239 { 00:13:23.239 "name": "BaseBdev3", 00:13:23.239 "uuid": "e3e36c17-0c27-46e9-91e7-aab1b96fb995", 00:13:23.239 "is_configured": true, 00:13:23.239 "data_offset": 0, 00:13:23.239 "data_size": 65536 00:13:23.239 }, 00:13:23.239 { 00:13:23.239 "name": "BaseBdev4", 00:13:23.239 "uuid": "829fa262-f0ad-44d5-8b3a-05a61e2c3fb4", 00:13:23.239 "is_configured": true, 00:13:23.239 "data_offset": 0, 00:13:23.239 "data_size": 65536 00:13:23.239 } 00:13:23.239 ] 00:13:23.239 }' 00:13:23.239 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.239 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.842 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:23.842 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.842 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.842 [2024-12-10 21:40:24.293332] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:23.842 21:40:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.842 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:23.842 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.842 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:23.842 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.842 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.842 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:23.842 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.842 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.842 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.842 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.842 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.842 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.842 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.842 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.842 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.842 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.842 "name": "Existed_Raid", 00:13:23.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.842 
"strip_size_kb": 0, 00:13:23.842 "state": "configuring", 00:13:23.842 "raid_level": "raid1", 00:13:23.842 "superblock": false, 00:13:23.842 "num_base_bdevs": 4, 00:13:23.842 "num_base_bdevs_discovered": 2, 00:13:23.842 "num_base_bdevs_operational": 4, 00:13:23.842 "base_bdevs_list": [ 00:13:23.842 { 00:13:23.842 "name": "BaseBdev1", 00:13:23.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.842 "is_configured": false, 00:13:23.842 "data_offset": 0, 00:13:23.842 "data_size": 0 00:13:23.842 }, 00:13:23.842 { 00:13:23.842 "name": null, 00:13:23.842 "uuid": "cd467af4-c088-49cc-a499-480c8d739314", 00:13:23.842 "is_configured": false, 00:13:23.842 "data_offset": 0, 00:13:23.842 "data_size": 65536 00:13:23.842 }, 00:13:23.842 { 00:13:23.842 "name": "BaseBdev3", 00:13:23.842 "uuid": "e3e36c17-0c27-46e9-91e7-aab1b96fb995", 00:13:23.842 "is_configured": true, 00:13:23.842 "data_offset": 0, 00:13:23.842 "data_size": 65536 00:13:23.842 }, 00:13:23.842 { 00:13:23.842 "name": "BaseBdev4", 00:13:23.842 "uuid": "829fa262-f0ad-44d5-8b3a-05a61e2c3fb4", 00:13:23.842 "is_configured": true, 00:13:23.842 "data_offset": 0, 00:13:23.842 "data_size": 65536 00:13:23.842 } 00:13:23.842 ] 00:13:23.842 }' 00:13:23.842 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.842 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.103 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.103 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.103 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.103 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:24.103 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.103 21:40:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:24.103 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:24.103 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.103 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.103 [2024-12-10 21:40:24.848015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:24.103 BaseBdev1 00:13:24.103 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.103 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:24.103 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:24.103 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:24.103 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:24.103 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:24.103 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:24.103 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:24.103 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.103 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.103 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.103 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:24.104 21:40:24 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.104 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.104 [ 00:13:24.104 { 00:13:24.104 "name": "BaseBdev1", 00:13:24.104 "aliases": [ 00:13:24.104 "73f66520-8f05-46e0-ba1a-b97ed6fc708c" 00:13:24.104 ], 00:13:24.104 "product_name": "Malloc disk", 00:13:24.104 "block_size": 512, 00:13:24.104 "num_blocks": 65536, 00:13:24.104 "uuid": "73f66520-8f05-46e0-ba1a-b97ed6fc708c", 00:13:24.104 "assigned_rate_limits": { 00:13:24.104 "rw_ios_per_sec": 0, 00:13:24.104 "rw_mbytes_per_sec": 0, 00:13:24.104 "r_mbytes_per_sec": 0, 00:13:24.104 "w_mbytes_per_sec": 0 00:13:24.104 }, 00:13:24.104 "claimed": true, 00:13:24.104 "claim_type": "exclusive_write", 00:13:24.104 "zoned": false, 00:13:24.104 "supported_io_types": { 00:13:24.104 "read": true, 00:13:24.104 "write": true, 00:13:24.104 "unmap": true, 00:13:24.104 "flush": true, 00:13:24.104 "reset": true, 00:13:24.104 "nvme_admin": false, 00:13:24.104 "nvme_io": false, 00:13:24.104 "nvme_io_md": false, 00:13:24.104 "write_zeroes": true, 00:13:24.104 "zcopy": true, 00:13:24.104 "get_zone_info": false, 00:13:24.104 "zone_management": false, 00:13:24.104 "zone_append": false, 00:13:24.104 "compare": false, 00:13:24.104 "compare_and_write": false, 00:13:24.104 "abort": true, 00:13:24.104 "seek_hole": false, 00:13:24.104 "seek_data": false, 00:13:24.104 "copy": true, 00:13:24.104 "nvme_iov_md": false 00:13:24.104 }, 00:13:24.104 "memory_domains": [ 00:13:24.104 { 00:13:24.104 "dma_device_id": "system", 00:13:24.104 "dma_device_type": 1 00:13:24.364 }, 00:13:24.364 { 00:13:24.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.364 "dma_device_type": 2 00:13:24.364 } 00:13:24.364 ], 00:13:24.364 "driver_specific": {} 00:13:24.364 } 00:13:24.364 ] 00:13:24.364 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.364 21:40:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@911 -- # return 0 00:13:24.364 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:24.364 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:24.364 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:24.364 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:24.364 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:24.364 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:24.364 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.364 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.364 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.364 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.364 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.364 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.364 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.364 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.364 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.364 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.364 "name": "Existed_Raid", 00:13:24.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.364 
"strip_size_kb": 0, 00:13:24.364 "state": "configuring", 00:13:24.364 "raid_level": "raid1", 00:13:24.364 "superblock": false, 00:13:24.364 "num_base_bdevs": 4, 00:13:24.364 "num_base_bdevs_discovered": 3, 00:13:24.364 "num_base_bdevs_operational": 4, 00:13:24.364 "base_bdevs_list": [ 00:13:24.364 { 00:13:24.364 "name": "BaseBdev1", 00:13:24.364 "uuid": "73f66520-8f05-46e0-ba1a-b97ed6fc708c", 00:13:24.364 "is_configured": true, 00:13:24.364 "data_offset": 0, 00:13:24.364 "data_size": 65536 00:13:24.364 }, 00:13:24.364 { 00:13:24.364 "name": null, 00:13:24.364 "uuid": "cd467af4-c088-49cc-a499-480c8d739314", 00:13:24.364 "is_configured": false, 00:13:24.364 "data_offset": 0, 00:13:24.364 "data_size": 65536 00:13:24.364 }, 00:13:24.364 { 00:13:24.364 "name": "BaseBdev3", 00:13:24.364 "uuid": "e3e36c17-0c27-46e9-91e7-aab1b96fb995", 00:13:24.364 "is_configured": true, 00:13:24.364 "data_offset": 0, 00:13:24.364 "data_size": 65536 00:13:24.364 }, 00:13:24.364 { 00:13:24.364 "name": "BaseBdev4", 00:13:24.364 "uuid": "829fa262-f0ad-44d5-8b3a-05a61e2c3fb4", 00:13:24.364 "is_configured": true, 00:13:24.364 "data_offset": 0, 00:13:24.364 "data_size": 65536 00:13:24.364 } 00:13:24.364 ] 00:13:24.364 }' 00:13:24.364 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.364 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.623 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.623 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:24.623 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.623 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.623 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.623 
21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:24.623 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:24.623 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.623 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.623 [2024-12-10 21:40:25.359364] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:24.623 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.623 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:24.623 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:24.623 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:24.623 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:24.623 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:24.623 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:24.623 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.623 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.623 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.623 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.623 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.623 21:40:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.623 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.623 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.623 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.883 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.883 "name": "Existed_Raid", 00:13:24.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.883 "strip_size_kb": 0, 00:13:24.883 "state": "configuring", 00:13:24.883 "raid_level": "raid1", 00:13:24.883 "superblock": false, 00:13:24.883 "num_base_bdevs": 4, 00:13:24.883 "num_base_bdevs_discovered": 2, 00:13:24.883 "num_base_bdevs_operational": 4, 00:13:24.883 "base_bdevs_list": [ 00:13:24.883 { 00:13:24.883 "name": "BaseBdev1", 00:13:24.883 "uuid": "73f66520-8f05-46e0-ba1a-b97ed6fc708c", 00:13:24.883 "is_configured": true, 00:13:24.883 "data_offset": 0, 00:13:24.883 "data_size": 65536 00:13:24.883 }, 00:13:24.883 { 00:13:24.883 "name": null, 00:13:24.883 "uuid": "cd467af4-c088-49cc-a499-480c8d739314", 00:13:24.883 "is_configured": false, 00:13:24.883 "data_offset": 0, 00:13:24.883 "data_size": 65536 00:13:24.883 }, 00:13:24.883 { 00:13:24.883 "name": null, 00:13:24.883 "uuid": "e3e36c17-0c27-46e9-91e7-aab1b96fb995", 00:13:24.883 "is_configured": false, 00:13:24.883 "data_offset": 0, 00:13:24.883 "data_size": 65536 00:13:24.883 }, 00:13:24.883 { 00:13:24.883 "name": "BaseBdev4", 00:13:24.883 "uuid": "829fa262-f0ad-44d5-8b3a-05a61e2c3fb4", 00:13:24.883 "is_configured": true, 00:13:24.883 "data_offset": 0, 00:13:24.883 "data_size": 65536 00:13:24.883 } 00:13:24.883 ] 00:13:24.883 }' 00:13:24.883 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.883 21:40:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:25.143 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.143 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.143 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.143 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:25.143 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.143 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:25.143 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:25.143 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.143 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.143 [2024-12-10 21:40:25.874435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:25.143 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.143 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:25.143 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:25.143 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:25.143 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.143 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.143 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:13:25.143 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.143 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.143 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.143 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.143 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.143 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.143 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.143 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.143 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.403 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.404 "name": "Existed_Raid", 00:13:25.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.404 "strip_size_kb": 0, 00:13:25.404 "state": "configuring", 00:13:25.404 "raid_level": "raid1", 00:13:25.404 "superblock": false, 00:13:25.404 "num_base_bdevs": 4, 00:13:25.404 "num_base_bdevs_discovered": 3, 00:13:25.404 "num_base_bdevs_operational": 4, 00:13:25.404 "base_bdevs_list": [ 00:13:25.404 { 00:13:25.404 "name": "BaseBdev1", 00:13:25.404 "uuid": "73f66520-8f05-46e0-ba1a-b97ed6fc708c", 00:13:25.404 "is_configured": true, 00:13:25.404 "data_offset": 0, 00:13:25.404 "data_size": 65536 00:13:25.404 }, 00:13:25.404 { 00:13:25.404 "name": null, 00:13:25.404 "uuid": "cd467af4-c088-49cc-a499-480c8d739314", 00:13:25.404 "is_configured": false, 00:13:25.404 "data_offset": 0, 00:13:25.404 "data_size": 65536 00:13:25.404 }, 00:13:25.404 { 
00:13:25.404 "name": "BaseBdev3", 00:13:25.404 "uuid": "e3e36c17-0c27-46e9-91e7-aab1b96fb995", 00:13:25.404 "is_configured": true, 00:13:25.404 "data_offset": 0, 00:13:25.404 "data_size": 65536 00:13:25.404 }, 00:13:25.404 { 00:13:25.404 "name": "BaseBdev4", 00:13:25.404 "uuid": "829fa262-f0ad-44d5-8b3a-05a61e2c3fb4", 00:13:25.404 "is_configured": true, 00:13:25.404 "data_offset": 0, 00:13:25.404 "data_size": 65536 00:13:25.404 } 00:13:25.404 ] 00:13:25.404 }' 00:13:25.404 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.404 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.663 21:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.663 21:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:25.663 21:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.663 21:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.663 21:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.663 21:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:25.663 21:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:25.663 21:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.663 21:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.663 [2024-12-10 21:40:26.393656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:25.923 21:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.923 21:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:25.923 21:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:25.923 21:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:25.923 21:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.923 21:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.923 21:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:25.923 21:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.923 21:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.923 21:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.923 21:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.923 21:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.923 21:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.923 21:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.923 21:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.923 21:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.923 21:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.923 "name": "Existed_Raid", 00:13:25.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.923 "strip_size_kb": 0, 00:13:25.923 "state": "configuring", 00:13:25.923 "raid_level": "raid1", 00:13:25.923 "superblock": false, 00:13:25.923 
"num_base_bdevs": 4, 00:13:25.923 "num_base_bdevs_discovered": 2, 00:13:25.923 "num_base_bdevs_operational": 4, 00:13:25.923 "base_bdevs_list": [ 00:13:25.923 { 00:13:25.923 "name": null, 00:13:25.923 "uuid": "73f66520-8f05-46e0-ba1a-b97ed6fc708c", 00:13:25.923 "is_configured": false, 00:13:25.923 "data_offset": 0, 00:13:25.923 "data_size": 65536 00:13:25.923 }, 00:13:25.923 { 00:13:25.923 "name": null, 00:13:25.923 "uuid": "cd467af4-c088-49cc-a499-480c8d739314", 00:13:25.923 "is_configured": false, 00:13:25.923 "data_offset": 0, 00:13:25.923 "data_size": 65536 00:13:25.923 }, 00:13:25.923 { 00:13:25.923 "name": "BaseBdev3", 00:13:25.923 "uuid": "e3e36c17-0c27-46e9-91e7-aab1b96fb995", 00:13:25.923 "is_configured": true, 00:13:25.923 "data_offset": 0, 00:13:25.923 "data_size": 65536 00:13:25.923 }, 00:13:25.923 { 00:13:25.923 "name": "BaseBdev4", 00:13:25.923 "uuid": "829fa262-f0ad-44d5-8b3a-05a61e2c3fb4", 00:13:25.923 "is_configured": true, 00:13:25.923 "data_offset": 0, 00:13:25.923 "data_size": 65536 00:13:25.923 } 00:13:25.923 ] 00:13:25.923 }' 00:13:25.923 21:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.923 21:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.494 21:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:26.494 21:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.494 21:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.494 21:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.494 21:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.494 21:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:26.494 21:40:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:26.494 21:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.494 21:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.494 [2024-12-10 21:40:27.028217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:26.494 21:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.494 21:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:26.494 21:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.494 21:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.494 21:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.494 21:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.494 21:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:26.494 21:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.494 21:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.494 21:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.494 21:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.494 21:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.494 21:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.494 21:40:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.494 21:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.494 21:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.494 21:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.494 "name": "Existed_Raid", 00:13:26.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.494 "strip_size_kb": 0, 00:13:26.494 "state": "configuring", 00:13:26.494 "raid_level": "raid1", 00:13:26.494 "superblock": false, 00:13:26.494 "num_base_bdevs": 4, 00:13:26.494 "num_base_bdevs_discovered": 3, 00:13:26.494 "num_base_bdevs_operational": 4, 00:13:26.494 "base_bdevs_list": [ 00:13:26.494 { 00:13:26.494 "name": null, 00:13:26.494 "uuid": "73f66520-8f05-46e0-ba1a-b97ed6fc708c", 00:13:26.494 "is_configured": false, 00:13:26.494 "data_offset": 0, 00:13:26.494 "data_size": 65536 00:13:26.494 }, 00:13:26.494 { 00:13:26.494 "name": "BaseBdev2", 00:13:26.494 "uuid": "cd467af4-c088-49cc-a499-480c8d739314", 00:13:26.494 "is_configured": true, 00:13:26.494 "data_offset": 0, 00:13:26.494 "data_size": 65536 00:13:26.494 }, 00:13:26.494 { 00:13:26.494 "name": "BaseBdev3", 00:13:26.494 "uuid": "e3e36c17-0c27-46e9-91e7-aab1b96fb995", 00:13:26.494 "is_configured": true, 00:13:26.494 "data_offset": 0, 00:13:26.494 "data_size": 65536 00:13:26.494 }, 00:13:26.494 { 00:13:26.494 "name": "BaseBdev4", 00:13:26.494 "uuid": "829fa262-f0ad-44d5-8b3a-05a61e2c3fb4", 00:13:26.494 "is_configured": true, 00:13:26.494 "data_offset": 0, 00:13:26.494 "data_size": 65536 00:13:26.494 } 00:13:26.494 ] 00:13:26.494 }' 00:13:26.494 21:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.494 21:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.753 21:40:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:26.753 21:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.753 21:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.753 21:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.012 21:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.012 21:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:27.012 21:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.012 21:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:27.012 21:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.012 21:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.012 21:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.012 21:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 73f66520-8f05-46e0-ba1a-b97ed6fc708c 00:13:27.012 21:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.012 21:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.012 [2024-12-10 21:40:27.640703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:27.012 [2024-12-10 21:40:27.640773] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:27.012 [2024-12-10 21:40:27.640784] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:27.012 
[2024-12-10 21:40:27.641054] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:27.012 [2024-12-10 21:40:27.641248] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:27.012 [2024-12-10 21:40:27.641280] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:27.012 NewBaseBdev 00:13:27.012 [2024-12-10 21:40:27.641575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.013 21:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.013 21:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:27.013 21:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:27.013 21:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:27.013 21:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:27.013 21:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:27.013 21:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:27.013 21:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:27.013 21:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.013 21:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.013 21:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.013 21:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:27.013 21:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:27.013 21:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.013 [ 00:13:27.013 { 00:13:27.013 "name": "NewBaseBdev", 00:13:27.013 "aliases": [ 00:13:27.013 "73f66520-8f05-46e0-ba1a-b97ed6fc708c" 00:13:27.013 ], 00:13:27.013 "product_name": "Malloc disk", 00:13:27.013 "block_size": 512, 00:13:27.013 "num_blocks": 65536, 00:13:27.013 "uuid": "73f66520-8f05-46e0-ba1a-b97ed6fc708c", 00:13:27.013 "assigned_rate_limits": { 00:13:27.013 "rw_ios_per_sec": 0, 00:13:27.013 "rw_mbytes_per_sec": 0, 00:13:27.013 "r_mbytes_per_sec": 0, 00:13:27.013 "w_mbytes_per_sec": 0 00:13:27.013 }, 00:13:27.013 "claimed": true, 00:13:27.013 "claim_type": "exclusive_write", 00:13:27.013 "zoned": false, 00:13:27.013 "supported_io_types": { 00:13:27.013 "read": true, 00:13:27.013 "write": true, 00:13:27.013 "unmap": true, 00:13:27.013 "flush": true, 00:13:27.013 "reset": true, 00:13:27.013 "nvme_admin": false, 00:13:27.013 "nvme_io": false, 00:13:27.013 "nvme_io_md": false, 00:13:27.013 "write_zeroes": true, 00:13:27.013 "zcopy": true, 00:13:27.013 "get_zone_info": false, 00:13:27.013 "zone_management": false, 00:13:27.013 "zone_append": false, 00:13:27.013 "compare": false, 00:13:27.013 "compare_and_write": false, 00:13:27.013 "abort": true, 00:13:27.013 "seek_hole": false, 00:13:27.013 "seek_data": false, 00:13:27.013 "copy": true, 00:13:27.013 "nvme_iov_md": false 00:13:27.013 }, 00:13:27.013 "memory_domains": [ 00:13:27.013 { 00:13:27.013 "dma_device_id": "system", 00:13:27.013 "dma_device_type": 1 00:13:27.013 }, 00:13:27.013 { 00:13:27.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.013 "dma_device_type": 2 00:13:27.013 } 00:13:27.013 ], 00:13:27.013 "driver_specific": {} 00:13:27.013 } 00:13:27.013 ] 00:13:27.013 21:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.013 21:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:13:27.013 21:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:27.013 21:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.013 21:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:27.013 21:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.013 21:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.013 21:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:27.013 21:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.013 21:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.013 21:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.013 21:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.013 21:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.013 21:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.013 21:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.013 21:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.013 21:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.013 21:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.013 "name": "Existed_Raid", 00:13:27.013 "uuid": "e123368f-e502-4956-bf11-f823c4db2b6b", 00:13:27.013 "strip_size_kb": 0, 00:13:27.013 "state": "online", 00:13:27.013 
"raid_level": "raid1", 00:13:27.013 "superblock": false, 00:13:27.013 "num_base_bdevs": 4, 00:13:27.013 "num_base_bdevs_discovered": 4, 00:13:27.013 "num_base_bdevs_operational": 4, 00:13:27.013 "base_bdevs_list": [ 00:13:27.013 { 00:13:27.013 "name": "NewBaseBdev", 00:13:27.013 "uuid": "73f66520-8f05-46e0-ba1a-b97ed6fc708c", 00:13:27.013 "is_configured": true, 00:13:27.013 "data_offset": 0, 00:13:27.013 "data_size": 65536 00:13:27.013 }, 00:13:27.013 { 00:13:27.013 "name": "BaseBdev2", 00:13:27.013 "uuid": "cd467af4-c088-49cc-a499-480c8d739314", 00:13:27.013 "is_configured": true, 00:13:27.013 "data_offset": 0, 00:13:27.013 "data_size": 65536 00:13:27.013 }, 00:13:27.013 { 00:13:27.013 "name": "BaseBdev3", 00:13:27.013 "uuid": "e3e36c17-0c27-46e9-91e7-aab1b96fb995", 00:13:27.013 "is_configured": true, 00:13:27.013 "data_offset": 0, 00:13:27.013 "data_size": 65536 00:13:27.013 }, 00:13:27.013 { 00:13:27.013 "name": "BaseBdev4", 00:13:27.013 "uuid": "829fa262-f0ad-44d5-8b3a-05a61e2c3fb4", 00:13:27.013 "is_configured": true, 00:13:27.013 "data_offset": 0, 00:13:27.013 "data_size": 65536 00:13:27.013 } 00:13:27.013 ] 00:13:27.013 }' 00:13:27.013 21:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.013 21:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.581 21:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:27.581 21:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:27.581 21:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:27.581 21:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:27.581 21:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:27.581 21:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:13:27.581 21:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:27.581 21:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:27.581 21:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.581 21:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.581 [2024-12-10 21:40:28.144240] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:27.581 21:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.581 21:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:27.581 "name": "Existed_Raid", 00:13:27.581 "aliases": [ 00:13:27.581 "e123368f-e502-4956-bf11-f823c4db2b6b" 00:13:27.581 ], 00:13:27.581 "product_name": "Raid Volume", 00:13:27.581 "block_size": 512, 00:13:27.581 "num_blocks": 65536, 00:13:27.581 "uuid": "e123368f-e502-4956-bf11-f823c4db2b6b", 00:13:27.581 "assigned_rate_limits": { 00:13:27.581 "rw_ios_per_sec": 0, 00:13:27.581 "rw_mbytes_per_sec": 0, 00:13:27.582 "r_mbytes_per_sec": 0, 00:13:27.582 "w_mbytes_per_sec": 0 00:13:27.582 }, 00:13:27.582 "claimed": false, 00:13:27.582 "zoned": false, 00:13:27.582 "supported_io_types": { 00:13:27.582 "read": true, 00:13:27.582 "write": true, 00:13:27.582 "unmap": false, 00:13:27.582 "flush": false, 00:13:27.582 "reset": true, 00:13:27.582 "nvme_admin": false, 00:13:27.582 "nvme_io": false, 00:13:27.582 "nvme_io_md": false, 00:13:27.582 "write_zeroes": true, 00:13:27.582 "zcopy": false, 00:13:27.582 "get_zone_info": false, 00:13:27.582 "zone_management": false, 00:13:27.582 "zone_append": false, 00:13:27.582 "compare": false, 00:13:27.582 "compare_and_write": false, 00:13:27.582 "abort": false, 00:13:27.582 "seek_hole": false, 00:13:27.582 "seek_data": false, 00:13:27.582 
"copy": false, 00:13:27.582 "nvme_iov_md": false 00:13:27.582 }, 00:13:27.582 "memory_domains": [ 00:13:27.582 { 00:13:27.582 "dma_device_id": "system", 00:13:27.582 "dma_device_type": 1 00:13:27.582 }, 00:13:27.582 { 00:13:27.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.582 "dma_device_type": 2 00:13:27.582 }, 00:13:27.582 { 00:13:27.582 "dma_device_id": "system", 00:13:27.582 "dma_device_type": 1 00:13:27.582 }, 00:13:27.582 { 00:13:27.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.582 "dma_device_type": 2 00:13:27.582 }, 00:13:27.582 { 00:13:27.582 "dma_device_id": "system", 00:13:27.582 "dma_device_type": 1 00:13:27.582 }, 00:13:27.582 { 00:13:27.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.582 "dma_device_type": 2 00:13:27.582 }, 00:13:27.582 { 00:13:27.582 "dma_device_id": "system", 00:13:27.582 "dma_device_type": 1 00:13:27.582 }, 00:13:27.582 { 00:13:27.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.582 "dma_device_type": 2 00:13:27.582 } 00:13:27.582 ], 00:13:27.582 "driver_specific": { 00:13:27.582 "raid": { 00:13:27.582 "uuid": "e123368f-e502-4956-bf11-f823c4db2b6b", 00:13:27.582 "strip_size_kb": 0, 00:13:27.582 "state": "online", 00:13:27.582 "raid_level": "raid1", 00:13:27.582 "superblock": false, 00:13:27.582 "num_base_bdevs": 4, 00:13:27.582 "num_base_bdevs_discovered": 4, 00:13:27.582 "num_base_bdevs_operational": 4, 00:13:27.582 "base_bdevs_list": [ 00:13:27.582 { 00:13:27.582 "name": "NewBaseBdev", 00:13:27.582 "uuid": "73f66520-8f05-46e0-ba1a-b97ed6fc708c", 00:13:27.582 "is_configured": true, 00:13:27.582 "data_offset": 0, 00:13:27.582 "data_size": 65536 00:13:27.582 }, 00:13:27.582 { 00:13:27.582 "name": "BaseBdev2", 00:13:27.582 "uuid": "cd467af4-c088-49cc-a499-480c8d739314", 00:13:27.582 "is_configured": true, 00:13:27.582 "data_offset": 0, 00:13:27.582 "data_size": 65536 00:13:27.582 }, 00:13:27.582 { 00:13:27.582 "name": "BaseBdev3", 00:13:27.582 "uuid": "e3e36c17-0c27-46e9-91e7-aab1b96fb995", 00:13:27.582 
"is_configured": true, 00:13:27.582 "data_offset": 0, 00:13:27.582 "data_size": 65536 00:13:27.582 }, 00:13:27.582 { 00:13:27.582 "name": "BaseBdev4", 00:13:27.582 "uuid": "829fa262-f0ad-44d5-8b3a-05a61e2c3fb4", 00:13:27.582 "is_configured": true, 00:13:27.582 "data_offset": 0, 00:13:27.582 "data_size": 65536 00:13:27.582 } 00:13:27.582 ] 00:13:27.582 } 00:13:27.582 } 00:13:27.582 }' 00:13:27.582 21:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:27.582 21:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:27.582 BaseBdev2 00:13:27.582 BaseBdev3 00:13:27.582 BaseBdev4' 00:13:27.582 21:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:27.582 21:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:27.582 21:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:27.582 21:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:27.582 21:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:27.582 21:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.582 21:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.582 21:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.582 21:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:27.582 21:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:27.582 21:40:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:27.582 21:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:27.582 21:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.582 21:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:27.582 21:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.582 21:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.582 21:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:27.582 21:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:27.582 21:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:27.582 21:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:27.582 21:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:27.582 21:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.582 21:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.841 21:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.841 21:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:27.841 21:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:27.842 21:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:27.842 21:40:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:27.842 21:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:27.842 21:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.842 21:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.842 21:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.842 21:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:27.842 21:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:27.842 21:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:27.842 21:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.842 21:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.842 [2024-12-10 21:40:28.431450] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:27.842 [2024-12-10 21:40:28.431481] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:27.842 [2024-12-10 21:40:28.431565] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:27.842 [2024-12-10 21:40:28.431909] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:27.842 [2024-12-10 21:40:28.431938] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:27.842 21:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.842 21:40:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73333 00:13:27.842 21:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73333 ']' 00:13:27.842 21:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73333 00:13:27.842 21:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:27.842 21:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:27.842 21:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73333 00:13:27.842 killing process with pid 73333 00:13:27.842 21:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:27.842 21:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:27.842 21:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73333' 00:13:27.842 21:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73333 00:13:27.842 [2024-12-10 21:40:28.482207] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:27.842 21:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73333 00:13:28.449 [2024-12-10 21:40:28.896756] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:29.401 21:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:29.401 00:13:29.401 real 0m12.054s 00:13:29.401 user 0m19.208s 00:13:29.401 sys 0m2.156s 00:13:29.401 21:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:29.401 21:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.401 ************************************ 00:13:29.401 END TEST raid_state_function_test 00:13:29.401 ************************************ 
00:13:29.401 21:40:30 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:13:29.401 21:40:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:29.401 21:40:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:29.401 21:40:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:29.401 ************************************ 00:13:29.401 START TEST raid_state_function_test_sb 00:13:29.401 ************************************ 00:13:29.401 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:13:29.401 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:29.401 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:29.401 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:29.401 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:29.401 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:29.401 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:29.401 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:29.401 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:29.401 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:29.401 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:29.401 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:29.401 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:29.401 
21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:29.401 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:29.401 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:29.401 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:29.401 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:29.401 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:29.401 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:29.401 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:29.401 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:29.401 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:29.401 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:29.401 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:29.401 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:29.401 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:29.401 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:29.401 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:29.401 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74010 00:13:29.401 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:29.402 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74010' 00:13:29.402 Process raid pid: 74010 00:13:29.402 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74010 00:13:29.402 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74010 ']' 00:13:29.402 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.402 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:29.402 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.402 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:29.402 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.660 [2024-12-10 21:40:30.225278] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:13:29.660 [2024-12-10 21:40:30.225405] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.660 [2024-12-10 21:40:30.394038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.918 [2024-12-10 21:40:30.511426] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.177 [2024-12-10 21:40:30.711873] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:30.178 [2024-12-10 21:40:30.711939] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:30.437 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:30.437 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:30.437 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:30.437 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.437 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.437 [2024-12-10 21:40:31.066780] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:30.437 [2024-12-10 21:40:31.066835] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:30.437 [2024-12-10 21:40:31.066845] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:30.437 [2024-12-10 21:40:31.066870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:30.437 [2024-12-10 21:40:31.066877] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:13:30.437 [2024-12-10 21:40:31.066886] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:30.437 [2024-12-10 21:40:31.066892] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:30.437 [2024-12-10 21:40:31.066901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:30.437 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.437 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:30.437 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:30.437 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:30.437 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:30.437 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:30.437 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:30.437 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.437 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.437 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.438 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.438 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.438 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.438 21:40:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.438 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.438 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.438 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.438 "name": "Existed_Raid", 00:13:30.438 "uuid": "bbdeceb7-0884-4741-98e6-1eec34be4678", 00:13:30.438 "strip_size_kb": 0, 00:13:30.438 "state": "configuring", 00:13:30.438 "raid_level": "raid1", 00:13:30.438 "superblock": true, 00:13:30.438 "num_base_bdevs": 4, 00:13:30.438 "num_base_bdevs_discovered": 0, 00:13:30.438 "num_base_bdevs_operational": 4, 00:13:30.438 "base_bdevs_list": [ 00:13:30.438 { 00:13:30.438 "name": "BaseBdev1", 00:13:30.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.438 "is_configured": false, 00:13:30.438 "data_offset": 0, 00:13:30.438 "data_size": 0 00:13:30.438 }, 00:13:30.438 { 00:13:30.438 "name": "BaseBdev2", 00:13:30.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.438 "is_configured": false, 00:13:30.438 "data_offset": 0, 00:13:30.438 "data_size": 0 00:13:30.438 }, 00:13:30.438 { 00:13:30.438 "name": "BaseBdev3", 00:13:30.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.438 "is_configured": false, 00:13:30.438 "data_offset": 0, 00:13:30.438 "data_size": 0 00:13:30.438 }, 00:13:30.438 { 00:13:30.438 "name": "BaseBdev4", 00:13:30.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.438 "is_configured": false, 00:13:30.438 "data_offset": 0, 00:13:30.438 "data_size": 0 00:13:30.438 } 00:13:30.438 ] 00:13:30.438 }' 00:13:30.438 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.438 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.007 [2024-12-10 21:40:31.537914] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:31.007 [2024-12-10 21:40:31.537963] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.007 [2024-12-10 21:40:31.545902] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:31.007 [2024-12-10 21:40:31.545951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:31.007 [2024-12-10 21:40:31.545962] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:31.007 [2024-12-10 21:40:31.545972] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:31.007 [2024-12-10 21:40:31.545979] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:31.007 [2024-12-10 21:40:31.545989] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:31.007 [2024-12-10 21:40:31.545997] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:13:31.007 [2024-12-10 21:40:31.546006] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.007 [2024-12-10 21:40:31.590752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:31.007 BaseBdev1 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.007 [ 00:13:31.007 { 00:13:31.007 "name": "BaseBdev1", 00:13:31.007 "aliases": [ 00:13:31.007 "06db1c90-8cfa-4b29-a78d-e7ad4f592213" 00:13:31.007 ], 00:13:31.007 "product_name": "Malloc disk", 00:13:31.007 "block_size": 512, 00:13:31.007 "num_blocks": 65536, 00:13:31.007 "uuid": "06db1c90-8cfa-4b29-a78d-e7ad4f592213", 00:13:31.007 "assigned_rate_limits": { 00:13:31.007 "rw_ios_per_sec": 0, 00:13:31.007 "rw_mbytes_per_sec": 0, 00:13:31.007 "r_mbytes_per_sec": 0, 00:13:31.007 "w_mbytes_per_sec": 0 00:13:31.007 }, 00:13:31.007 "claimed": true, 00:13:31.007 "claim_type": "exclusive_write", 00:13:31.007 "zoned": false, 00:13:31.007 "supported_io_types": { 00:13:31.007 "read": true, 00:13:31.007 "write": true, 00:13:31.007 "unmap": true, 00:13:31.007 "flush": true, 00:13:31.007 "reset": true, 00:13:31.007 "nvme_admin": false, 00:13:31.007 "nvme_io": false, 00:13:31.007 "nvme_io_md": false, 00:13:31.007 "write_zeroes": true, 00:13:31.007 "zcopy": true, 00:13:31.007 "get_zone_info": false, 00:13:31.007 "zone_management": false, 00:13:31.007 "zone_append": false, 00:13:31.007 "compare": false, 00:13:31.007 "compare_and_write": false, 00:13:31.007 "abort": true, 00:13:31.007 "seek_hole": false, 00:13:31.007 "seek_data": false, 00:13:31.007 "copy": true, 00:13:31.007 "nvme_iov_md": false 00:13:31.007 }, 00:13:31.007 "memory_domains": [ 00:13:31.007 { 00:13:31.007 "dma_device_id": "system", 00:13:31.007 "dma_device_type": 1 00:13:31.007 }, 00:13:31.007 { 00:13:31.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.007 "dma_device_type": 2 00:13:31.007 } 00:13:31.007 ], 00:13:31.007 "driver_specific": {} 
00:13:31.007 } 00:13:31.007 ] 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.007 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.007 "name": "Existed_Raid", 00:13:31.007 "uuid": "6478681e-d448-4c51-a48b-308415dc8b2b", 00:13:31.007 "strip_size_kb": 0, 00:13:31.007 "state": "configuring", 00:13:31.007 "raid_level": "raid1", 00:13:31.007 "superblock": true, 00:13:31.007 "num_base_bdevs": 4, 00:13:31.007 "num_base_bdevs_discovered": 1, 00:13:31.007 "num_base_bdevs_operational": 4, 00:13:31.007 "base_bdevs_list": [ 00:13:31.007 { 00:13:31.007 "name": "BaseBdev1", 00:13:31.007 "uuid": "06db1c90-8cfa-4b29-a78d-e7ad4f592213", 00:13:31.007 "is_configured": true, 00:13:31.007 "data_offset": 2048, 00:13:31.007 "data_size": 63488 00:13:31.007 }, 00:13:31.007 { 00:13:31.007 "name": "BaseBdev2", 00:13:31.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.007 "is_configured": false, 00:13:31.007 "data_offset": 0, 00:13:31.007 "data_size": 0 00:13:31.007 }, 00:13:31.007 { 00:13:31.007 "name": "BaseBdev3", 00:13:31.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.007 "is_configured": false, 00:13:31.007 "data_offset": 0, 00:13:31.007 "data_size": 0 00:13:31.007 }, 00:13:31.007 { 00:13:31.007 "name": "BaseBdev4", 00:13:31.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.007 "is_configured": false, 00:13:31.007 "data_offset": 0, 00:13:31.007 "data_size": 0 00:13:31.007 } 00:13:31.007 ] 00:13:31.007 }' 00:13:31.008 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.008 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.577 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:31.577 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.577 21:40:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:31.577 [2024-12-10 21:40:32.105929] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:31.577 [2024-12-10 21:40:32.106058] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:31.577 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.577 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:31.577 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.577 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.577 [2024-12-10 21:40:32.113952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:31.577 [2024-12-10 21:40:32.115819] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:31.577 [2024-12-10 21:40:32.115916] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:31.577 [2024-12-10 21:40:32.115932] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:31.577 [2024-12-10 21:40:32.115944] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:31.577 [2024-12-10 21:40:32.115952] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:31.577 [2024-12-10 21:40:32.115961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:31.577 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.577 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:31.577 21:40:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:31.577 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:31.577 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:31.577 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:31.577 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.577 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.577 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:31.577 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.577 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.577 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.577 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.577 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.577 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.577 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.577 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.577 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.577 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.577 "name": 
"Existed_Raid", 00:13:31.577 "uuid": "98cc4cda-37f2-40d3-b57e-e33ba2ee775e", 00:13:31.577 "strip_size_kb": 0, 00:13:31.577 "state": "configuring", 00:13:31.577 "raid_level": "raid1", 00:13:31.577 "superblock": true, 00:13:31.577 "num_base_bdevs": 4, 00:13:31.577 "num_base_bdevs_discovered": 1, 00:13:31.577 "num_base_bdevs_operational": 4, 00:13:31.577 "base_bdevs_list": [ 00:13:31.577 { 00:13:31.577 "name": "BaseBdev1", 00:13:31.577 "uuid": "06db1c90-8cfa-4b29-a78d-e7ad4f592213", 00:13:31.577 "is_configured": true, 00:13:31.577 "data_offset": 2048, 00:13:31.577 "data_size": 63488 00:13:31.577 }, 00:13:31.577 { 00:13:31.577 "name": "BaseBdev2", 00:13:31.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.577 "is_configured": false, 00:13:31.577 "data_offset": 0, 00:13:31.577 "data_size": 0 00:13:31.577 }, 00:13:31.577 { 00:13:31.577 "name": "BaseBdev3", 00:13:31.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.577 "is_configured": false, 00:13:31.577 "data_offset": 0, 00:13:31.577 "data_size": 0 00:13:31.577 }, 00:13:31.577 { 00:13:31.577 "name": "BaseBdev4", 00:13:31.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.577 "is_configured": false, 00:13:31.577 "data_offset": 0, 00:13:31.577 "data_size": 0 00:13:31.577 } 00:13:31.577 ] 00:13:31.577 }' 00:13:31.577 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.577 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.837 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:31.837 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.837 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.096 [2024-12-10 21:40:32.648978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:32.096 
BaseBdev2 00:13:32.096 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.096 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:32.096 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:32.096 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:32.096 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:32.096 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:32.096 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:32.096 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:32.096 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.096 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.096 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.096 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:32.096 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.096 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.096 [ 00:13:32.096 { 00:13:32.096 "name": "BaseBdev2", 00:13:32.096 "aliases": [ 00:13:32.096 "33cff366-7747-475c-b5af-f640eb8262c0" 00:13:32.096 ], 00:13:32.096 "product_name": "Malloc disk", 00:13:32.096 "block_size": 512, 00:13:32.096 "num_blocks": 65536, 00:13:32.096 "uuid": "33cff366-7747-475c-b5af-f640eb8262c0", 00:13:32.096 "assigned_rate_limits": { 
00:13:32.096 "rw_ios_per_sec": 0, 00:13:32.096 "rw_mbytes_per_sec": 0, 00:13:32.096 "r_mbytes_per_sec": 0, 00:13:32.096 "w_mbytes_per_sec": 0 00:13:32.096 }, 00:13:32.096 "claimed": true, 00:13:32.096 "claim_type": "exclusive_write", 00:13:32.096 "zoned": false, 00:13:32.096 "supported_io_types": { 00:13:32.096 "read": true, 00:13:32.096 "write": true, 00:13:32.096 "unmap": true, 00:13:32.096 "flush": true, 00:13:32.096 "reset": true, 00:13:32.096 "nvme_admin": false, 00:13:32.096 "nvme_io": false, 00:13:32.096 "nvme_io_md": false, 00:13:32.096 "write_zeroes": true, 00:13:32.096 "zcopy": true, 00:13:32.096 "get_zone_info": false, 00:13:32.096 "zone_management": false, 00:13:32.096 "zone_append": false, 00:13:32.096 "compare": false, 00:13:32.096 "compare_and_write": false, 00:13:32.096 "abort": true, 00:13:32.096 "seek_hole": false, 00:13:32.096 "seek_data": false, 00:13:32.096 "copy": true, 00:13:32.096 "nvme_iov_md": false 00:13:32.096 }, 00:13:32.096 "memory_domains": [ 00:13:32.096 { 00:13:32.096 "dma_device_id": "system", 00:13:32.096 "dma_device_type": 1 00:13:32.096 }, 00:13:32.096 { 00:13:32.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.096 "dma_device_type": 2 00:13:32.096 } 00:13:32.096 ], 00:13:32.096 "driver_specific": {} 00:13:32.096 } 00:13:32.096 ] 00:13:32.096 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.096 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:32.096 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:32.096 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:32.096 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:32.096 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:13:32.096 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:32.096 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.096 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.096 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:32.096 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.096 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.096 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.096 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.096 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.096 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:32.096 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.096 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.096 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.096 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.096 "name": "Existed_Raid", 00:13:32.096 "uuid": "98cc4cda-37f2-40d3-b57e-e33ba2ee775e", 00:13:32.096 "strip_size_kb": 0, 00:13:32.096 "state": "configuring", 00:13:32.096 "raid_level": "raid1", 00:13:32.096 "superblock": true, 00:13:32.096 "num_base_bdevs": 4, 00:13:32.096 "num_base_bdevs_discovered": 2, 00:13:32.096 "num_base_bdevs_operational": 4, 00:13:32.096 
"base_bdevs_list": [ 00:13:32.096 { 00:13:32.096 "name": "BaseBdev1", 00:13:32.096 "uuid": "06db1c90-8cfa-4b29-a78d-e7ad4f592213", 00:13:32.096 "is_configured": true, 00:13:32.096 "data_offset": 2048, 00:13:32.096 "data_size": 63488 00:13:32.096 }, 00:13:32.096 { 00:13:32.096 "name": "BaseBdev2", 00:13:32.096 "uuid": "33cff366-7747-475c-b5af-f640eb8262c0", 00:13:32.096 "is_configured": true, 00:13:32.096 "data_offset": 2048, 00:13:32.096 "data_size": 63488 00:13:32.096 }, 00:13:32.096 { 00:13:32.096 "name": "BaseBdev3", 00:13:32.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.096 "is_configured": false, 00:13:32.096 "data_offset": 0, 00:13:32.096 "data_size": 0 00:13:32.096 }, 00:13:32.096 { 00:13:32.096 "name": "BaseBdev4", 00:13:32.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.096 "is_configured": false, 00:13:32.096 "data_offset": 0, 00:13:32.096 "data_size": 0 00:13:32.096 } 00:13:32.096 ] 00:13:32.096 }' 00:13:32.096 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.096 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.356 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:32.356 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.356 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.615 [2024-12-10 21:40:33.184902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:32.615 BaseBdev3 00:13:32.615 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.615 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:32.615 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:13:32.615 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:32.615 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:32.615 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:32.615 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:32.615 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:32.615 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.615 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.615 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.615 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:32.615 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.615 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.615 [ 00:13:32.615 { 00:13:32.615 "name": "BaseBdev3", 00:13:32.615 "aliases": [ 00:13:32.615 "182dcd7e-5c79-4393-8a35-d149742da25c" 00:13:32.615 ], 00:13:32.615 "product_name": "Malloc disk", 00:13:32.615 "block_size": 512, 00:13:32.615 "num_blocks": 65536, 00:13:32.615 "uuid": "182dcd7e-5c79-4393-8a35-d149742da25c", 00:13:32.615 "assigned_rate_limits": { 00:13:32.615 "rw_ios_per_sec": 0, 00:13:32.615 "rw_mbytes_per_sec": 0, 00:13:32.615 "r_mbytes_per_sec": 0, 00:13:32.615 "w_mbytes_per_sec": 0 00:13:32.615 }, 00:13:32.615 "claimed": true, 00:13:32.615 "claim_type": "exclusive_write", 00:13:32.615 "zoned": false, 00:13:32.615 "supported_io_types": { 00:13:32.615 "read": true, 00:13:32.615 
"write": true, 00:13:32.615 "unmap": true, 00:13:32.615 "flush": true, 00:13:32.615 "reset": true, 00:13:32.615 "nvme_admin": false, 00:13:32.615 "nvme_io": false, 00:13:32.615 "nvme_io_md": false, 00:13:32.615 "write_zeroes": true, 00:13:32.615 "zcopy": true, 00:13:32.615 "get_zone_info": false, 00:13:32.615 "zone_management": false, 00:13:32.615 "zone_append": false, 00:13:32.615 "compare": false, 00:13:32.615 "compare_and_write": false, 00:13:32.615 "abort": true, 00:13:32.615 "seek_hole": false, 00:13:32.615 "seek_data": false, 00:13:32.615 "copy": true, 00:13:32.615 "nvme_iov_md": false 00:13:32.615 }, 00:13:32.615 "memory_domains": [ 00:13:32.615 { 00:13:32.615 "dma_device_id": "system", 00:13:32.615 "dma_device_type": 1 00:13:32.615 }, 00:13:32.615 { 00:13:32.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.615 "dma_device_type": 2 00:13:32.615 } 00:13:32.615 ], 00:13:32.615 "driver_specific": {} 00:13:32.615 } 00:13:32.615 ] 00:13:32.615 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.615 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:32.615 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:32.615 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:32.615 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:32.615 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:32.615 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:32.615 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.615 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:13:32.615 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:32.615 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.615 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.615 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.615 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.615 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.615 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:32.615 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.615 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.615 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.615 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.615 "name": "Existed_Raid", 00:13:32.615 "uuid": "98cc4cda-37f2-40d3-b57e-e33ba2ee775e", 00:13:32.615 "strip_size_kb": 0, 00:13:32.615 "state": "configuring", 00:13:32.615 "raid_level": "raid1", 00:13:32.615 "superblock": true, 00:13:32.615 "num_base_bdevs": 4, 00:13:32.615 "num_base_bdevs_discovered": 3, 00:13:32.615 "num_base_bdevs_operational": 4, 00:13:32.615 "base_bdevs_list": [ 00:13:32.615 { 00:13:32.615 "name": "BaseBdev1", 00:13:32.615 "uuid": "06db1c90-8cfa-4b29-a78d-e7ad4f592213", 00:13:32.615 "is_configured": true, 00:13:32.615 "data_offset": 2048, 00:13:32.615 "data_size": 63488 00:13:32.615 }, 00:13:32.615 { 00:13:32.615 "name": "BaseBdev2", 00:13:32.615 "uuid": 
"33cff366-7747-475c-b5af-f640eb8262c0", 00:13:32.615 "is_configured": true, 00:13:32.615 "data_offset": 2048, 00:13:32.615 "data_size": 63488 00:13:32.615 }, 00:13:32.615 { 00:13:32.615 "name": "BaseBdev3", 00:13:32.615 "uuid": "182dcd7e-5c79-4393-8a35-d149742da25c", 00:13:32.615 "is_configured": true, 00:13:32.615 "data_offset": 2048, 00:13:32.615 "data_size": 63488 00:13:32.615 }, 00:13:32.615 { 00:13:32.615 "name": "BaseBdev4", 00:13:32.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.615 "is_configured": false, 00:13:32.615 "data_offset": 0, 00:13:32.615 "data_size": 0 00:13:32.615 } 00:13:32.615 ] 00:13:32.615 }' 00:13:32.616 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.616 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.186 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:33.186 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.186 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.186 [2024-12-10 21:40:33.725586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:33.186 [2024-12-10 21:40:33.725979] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:33.186 [2024-12-10 21:40:33.726040] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:33.186 [2024-12-10 21:40:33.726364] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:33.186 [2024-12-10 21:40:33.726616] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:33.186 BaseBdev4 00:13:33.186 [2024-12-10 21:40:33.726682] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:13:33.186 [2024-12-10 21:40:33.726910] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.186 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.186 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:33.186 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:33.186 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:33.186 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:33.186 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:33.186 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:33.186 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:33.186 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.186 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.186 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.186 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:33.186 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.186 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.186 [ 00:13:33.186 { 00:13:33.186 "name": "BaseBdev4", 00:13:33.186 "aliases": [ 00:13:33.186 "b4a110cf-b873-4332-97bb-26f1d1983e44" 00:13:33.186 ], 00:13:33.186 "product_name": "Malloc disk", 00:13:33.186 "block_size": 512, 00:13:33.186 
"num_blocks": 65536, 00:13:33.186 "uuid": "b4a110cf-b873-4332-97bb-26f1d1983e44", 00:13:33.186 "assigned_rate_limits": { 00:13:33.186 "rw_ios_per_sec": 0, 00:13:33.186 "rw_mbytes_per_sec": 0, 00:13:33.186 "r_mbytes_per_sec": 0, 00:13:33.186 "w_mbytes_per_sec": 0 00:13:33.186 }, 00:13:33.186 "claimed": true, 00:13:33.186 "claim_type": "exclusive_write", 00:13:33.186 "zoned": false, 00:13:33.186 "supported_io_types": { 00:13:33.186 "read": true, 00:13:33.186 "write": true, 00:13:33.186 "unmap": true, 00:13:33.186 "flush": true, 00:13:33.186 "reset": true, 00:13:33.186 "nvme_admin": false, 00:13:33.186 "nvme_io": false, 00:13:33.186 "nvme_io_md": false, 00:13:33.186 "write_zeroes": true, 00:13:33.186 "zcopy": true, 00:13:33.186 "get_zone_info": false, 00:13:33.186 "zone_management": false, 00:13:33.186 "zone_append": false, 00:13:33.186 "compare": false, 00:13:33.186 "compare_and_write": false, 00:13:33.186 "abort": true, 00:13:33.186 "seek_hole": false, 00:13:33.186 "seek_data": false, 00:13:33.186 "copy": true, 00:13:33.186 "nvme_iov_md": false 00:13:33.186 }, 00:13:33.186 "memory_domains": [ 00:13:33.186 { 00:13:33.186 "dma_device_id": "system", 00:13:33.186 "dma_device_type": 1 00:13:33.186 }, 00:13:33.186 { 00:13:33.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.186 "dma_device_type": 2 00:13:33.186 } 00:13:33.186 ], 00:13:33.186 "driver_specific": {} 00:13:33.186 } 00:13:33.186 ] 00:13:33.186 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.186 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:33.186 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:33.186 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:33.186 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:13:33.186 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:33.186 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.186 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.187 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.187 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:33.187 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.187 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.187 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.187 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.187 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.187 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.187 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.187 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.187 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.187 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.187 "name": "Existed_Raid", 00:13:33.187 "uuid": "98cc4cda-37f2-40d3-b57e-e33ba2ee775e", 00:13:33.187 "strip_size_kb": 0, 00:13:33.187 "state": "online", 00:13:33.187 "raid_level": "raid1", 00:13:33.187 "superblock": true, 00:13:33.187 "num_base_bdevs": 4, 
00:13:33.187 "num_base_bdevs_discovered": 4, 00:13:33.187 "num_base_bdevs_operational": 4, 00:13:33.187 "base_bdevs_list": [ 00:13:33.187 { 00:13:33.187 "name": "BaseBdev1", 00:13:33.187 "uuid": "06db1c90-8cfa-4b29-a78d-e7ad4f592213", 00:13:33.187 "is_configured": true, 00:13:33.187 "data_offset": 2048, 00:13:33.187 "data_size": 63488 00:13:33.187 }, 00:13:33.187 { 00:13:33.187 "name": "BaseBdev2", 00:13:33.187 "uuid": "33cff366-7747-475c-b5af-f640eb8262c0", 00:13:33.187 "is_configured": true, 00:13:33.187 "data_offset": 2048, 00:13:33.187 "data_size": 63488 00:13:33.187 }, 00:13:33.187 { 00:13:33.187 "name": "BaseBdev3", 00:13:33.187 "uuid": "182dcd7e-5c79-4393-8a35-d149742da25c", 00:13:33.187 "is_configured": true, 00:13:33.187 "data_offset": 2048, 00:13:33.187 "data_size": 63488 00:13:33.187 }, 00:13:33.187 { 00:13:33.187 "name": "BaseBdev4", 00:13:33.187 "uuid": "b4a110cf-b873-4332-97bb-26f1d1983e44", 00:13:33.187 "is_configured": true, 00:13:33.187 "data_offset": 2048, 00:13:33.187 "data_size": 63488 00:13:33.187 } 00:13:33.187 ] 00:13:33.187 }' 00:13:33.187 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.187 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.446 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:33.446 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:33.446 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:33.447 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:33.447 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:33.447 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:33.447 
21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:33.447 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:33.447 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.447 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.447 [2024-12-10 21:40:34.201258] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:33.447 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.447 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:33.447 "name": "Existed_Raid", 00:13:33.447 "aliases": [ 00:13:33.447 "98cc4cda-37f2-40d3-b57e-e33ba2ee775e" 00:13:33.447 ], 00:13:33.447 "product_name": "Raid Volume", 00:13:33.447 "block_size": 512, 00:13:33.447 "num_blocks": 63488, 00:13:33.447 "uuid": "98cc4cda-37f2-40d3-b57e-e33ba2ee775e", 00:13:33.447 "assigned_rate_limits": { 00:13:33.447 "rw_ios_per_sec": 0, 00:13:33.447 "rw_mbytes_per_sec": 0, 00:13:33.447 "r_mbytes_per_sec": 0, 00:13:33.447 "w_mbytes_per_sec": 0 00:13:33.447 }, 00:13:33.447 "claimed": false, 00:13:33.447 "zoned": false, 00:13:33.447 "supported_io_types": { 00:13:33.447 "read": true, 00:13:33.447 "write": true, 00:13:33.447 "unmap": false, 00:13:33.447 "flush": false, 00:13:33.447 "reset": true, 00:13:33.447 "nvme_admin": false, 00:13:33.447 "nvme_io": false, 00:13:33.447 "nvme_io_md": false, 00:13:33.447 "write_zeroes": true, 00:13:33.447 "zcopy": false, 00:13:33.447 "get_zone_info": false, 00:13:33.447 "zone_management": false, 00:13:33.447 "zone_append": false, 00:13:33.447 "compare": false, 00:13:33.447 "compare_and_write": false, 00:13:33.447 "abort": false, 00:13:33.447 "seek_hole": false, 00:13:33.447 "seek_data": false, 00:13:33.447 "copy": false, 00:13:33.447 
"nvme_iov_md": false 00:13:33.447 }, 00:13:33.447 "memory_domains": [ 00:13:33.447 { 00:13:33.447 "dma_device_id": "system", 00:13:33.447 "dma_device_type": 1 00:13:33.447 }, 00:13:33.447 { 00:13:33.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.447 "dma_device_type": 2 00:13:33.447 }, 00:13:33.447 { 00:13:33.447 "dma_device_id": "system", 00:13:33.447 "dma_device_type": 1 00:13:33.447 }, 00:13:33.447 { 00:13:33.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.447 "dma_device_type": 2 00:13:33.447 }, 00:13:33.447 { 00:13:33.447 "dma_device_id": "system", 00:13:33.447 "dma_device_type": 1 00:13:33.447 }, 00:13:33.447 { 00:13:33.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.447 "dma_device_type": 2 00:13:33.447 }, 00:13:33.447 { 00:13:33.447 "dma_device_id": "system", 00:13:33.447 "dma_device_type": 1 00:13:33.447 }, 00:13:33.447 { 00:13:33.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.447 "dma_device_type": 2 00:13:33.447 } 00:13:33.447 ], 00:13:33.447 "driver_specific": { 00:13:33.447 "raid": { 00:13:33.447 "uuid": "98cc4cda-37f2-40d3-b57e-e33ba2ee775e", 00:13:33.447 "strip_size_kb": 0, 00:13:33.447 "state": "online", 00:13:33.447 "raid_level": "raid1", 00:13:33.447 "superblock": true, 00:13:33.447 "num_base_bdevs": 4, 00:13:33.447 "num_base_bdevs_discovered": 4, 00:13:33.447 "num_base_bdevs_operational": 4, 00:13:33.447 "base_bdevs_list": [ 00:13:33.447 { 00:13:33.447 "name": "BaseBdev1", 00:13:33.447 "uuid": "06db1c90-8cfa-4b29-a78d-e7ad4f592213", 00:13:33.447 "is_configured": true, 00:13:33.447 "data_offset": 2048, 00:13:33.447 "data_size": 63488 00:13:33.447 }, 00:13:33.447 { 00:13:33.447 "name": "BaseBdev2", 00:13:33.447 "uuid": "33cff366-7747-475c-b5af-f640eb8262c0", 00:13:33.447 "is_configured": true, 00:13:33.447 "data_offset": 2048, 00:13:33.447 "data_size": 63488 00:13:33.447 }, 00:13:33.447 { 00:13:33.447 "name": "BaseBdev3", 00:13:33.447 "uuid": "182dcd7e-5c79-4393-8a35-d149742da25c", 00:13:33.447 "is_configured": true, 
00:13:33.447 "data_offset": 2048, 00:13:33.447 "data_size": 63488 00:13:33.447 }, 00:13:33.447 { 00:13:33.447 "name": "BaseBdev4", 00:13:33.447 "uuid": "b4a110cf-b873-4332-97bb-26f1d1983e44", 00:13:33.447 "is_configured": true, 00:13:33.447 "data_offset": 2048, 00:13:33.447 "data_size": 63488 00:13:33.447 } 00:13:33.447 ] 00:13:33.447 } 00:13:33.447 } 00:13:33.447 }' 00:13:33.706 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:33.706 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:33.706 BaseBdev2 00:13:33.706 BaseBdev3 00:13:33.706 BaseBdev4' 00:13:33.706 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:33.706 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:33.706 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:33.706 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:33.706 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:33.706 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.706 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.706 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.706 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:33.706 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:33.706 21:40:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:33.706 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:33.706 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.706 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:33.706 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.707 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.707 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:33.707 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:33.707 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:33.707 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:33.707 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:33.707 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.707 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.707 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.707 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:33.707 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:33.707 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:13:33.707 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:33.707 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.707 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.707 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:33.707 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.965 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:33.966 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:33.966 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:33.966 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.966 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.966 [2024-12-10 21:40:34.504481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:33.966 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.966 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:33.966 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:33.966 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:33.966 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:33.966 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:33.966 21:40:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:33.966 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:33.966 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.966 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.966 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.966 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:33.966 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.966 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.966 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.966 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.966 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.966 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.966 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.966 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.966 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.966 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.966 "name": "Existed_Raid", 00:13:33.966 "uuid": "98cc4cda-37f2-40d3-b57e-e33ba2ee775e", 00:13:33.966 "strip_size_kb": 0, 00:13:33.966 
"state": "online", 00:13:33.966 "raid_level": "raid1", 00:13:33.966 "superblock": true, 00:13:33.966 "num_base_bdevs": 4, 00:13:33.966 "num_base_bdevs_discovered": 3, 00:13:33.966 "num_base_bdevs_operational": 3, 00:13:33.966 "base_bdevs_list": [ 00:13:33.966 { 00:13:33.966 "name": null, 00:13:33.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.966 "is_configured": false, 00:13:33.966 "data_offset": 0, 00:13:33.966 "data_size": 63488 00:13:33.966 }, 00:13:33.966 { 00:13:33.966 "name": "BaseBdev2", 00:13:33.966 "uuid": "33cff366-7747-475c-b5af-f640eb8262c0", 00:13:33.966 "is_configured": true, 00:13:33.966 "data_offset": 2048, 00:13:33.966 "data_size": 63488 00:13:33.966 }, 00:13:33.966 { 00:13:33.966 "name": "BaseBdev3", 00:13:33.966 "uuid": "182dcd7e-5c79-4393-8a35-d149742da25c", 00:13:33.966 "is_configured": true, 00:13:33.966 "data_offset": 2048, 00:13:33.966 "data_size": 63488 00:13:33.966 }, 00:13:33.966 { 00:13:33.966 "name": "BaseBdev4", 00:13:33.966 "uuid": "b4a110cf-b873-4332-97bb-26f1d1983e44", 00:13:33.966 "is_configured": true, 00:13:33.966 "data_offset": 2048, 00:13:33.966 "data_size": 63488 00:13:33.966 } 00:13:33.966 ] 00:13:33.966 }' 00:13:33.966 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.966 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.537 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:34.537 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:34.537 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.537 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:34.537 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.537 21:40:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.537 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.537 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:34.537 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:34.537 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:34.537 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.537 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.537 [2024-12-10 21:40:35.102700] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:34.538 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.538 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:34.538 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:34.538 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.538 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:34.538 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.538 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.538 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.538 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:34.538 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:13:34.538 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:34.538 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.538 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.538 [2024-12-10 21:40:35.262771] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:34.797 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.797 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:34.797 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:34.797 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.797 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:34.797 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.797 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.797 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.797 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:34.797 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:34.797 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:34.797 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.797 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.797 [2024-12-10 21:40:35.420284] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:34.797 [2024-12-10 21:40:35.420488] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:34.797 [2024-12-10 21:40:35.519597] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:34.797 [2024-12-10 21:40:35.519761] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:34.797 [2024-12-10 21:40:35.519812] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:34.797 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.797 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:34.797 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:34.797 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.797 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:34.797 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.797 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.797 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.797 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:34.797 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:34.797 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:34.797 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:34.797 21:40:35 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:34.797 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:34.797 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.797 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.057 BaseBdev2 00:13:35.057 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.057 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:35.057 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:35.057 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:35.057 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:35.057 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:35.057 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:35.057 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:13:35.058 [ 00:13:35.058 { 00:13:35.058 "name": "BaseBdev2", 00:13:35.058 "aliases": [ 00:13:35.058 "ea8ac440-344d-49fa-97b8-cc4edf9f3f9f" 00:13:35.058 ], 00:13:35.058 "product_name": "Malloc disk", 00:13:35.058 "block_size": 512, 00:13:35.058 "num_blocks": 65536, 00:13:35.058 "uuid": "ea8ac440-344d-49fa-97b8-cc4edf9f3f9f", 00:13:35.058 "assigned_rate_limits": { 00:13:35.058 "rw_ios_per_sec": 0, 00:13:35.058 "rw_mbytes_per_sec": 0, 00:13:35.058 "r_mbytes_per_sec": 0, 00:13:35.058 "w_mbytes_per_sec": 0 00:13:35.058 }, 00:13:35.058 "claimed": false, 00:13:35.058 "zoned": false, 00:13:35.058 "supported_io_types": { 00:13:35.058 "read": true, 00:13:35.058 "write": true, 00:13:35.058 "unmap": true, 00:13:35.058 "flush": true, 00:13:35.058 "reset": true, 00:13:35.058 "nvme_admin": false, 00:13:35.058 "nvme_io": false, 00:13:35.058 "nvme_io_md": false, 00:13:35.058 "write_zeroes": true, 00:13:35.058 "zcopy": true, 00:13:35.058 "get_zone_info": false, 00:13:35.058 "zone_management": false, 00:13:35.058 "zone_append": false, 00:13:35.058 "compare": false, 00:13:35.058 "compare_and_write": false, 00:13:35.058 "abort": true, 00:13:35.058 "seek_hole": false, 00:13:35.058 "seek_data": false, 00:13:35.058 "copy": true, 00:13:35.058 "nvme_iov_md": false 00:13:35.058 }, 00:13:35.058 "memory_domains": [ 00:13:35.058 { 00:13:35.058 "dma_device_id": "system", 00:13:35.058 "dma_device_type": 1 00:13:35.058 }, 00:13:35.058 { 00:13:35.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.058 "dma_device_type": 2 00:13:35.058 } 00:13:35.058 ], 00:13:35.058 "driver_specific": {} 00:13:35.058 } 00:13:35.058 ] 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:35.058 21:40:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.058 BaseBdev3 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.058 21:40:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.058 [ 00:13:35.058 { 00:13:35.058 "name": "BaseBdev3", 00:13:35.058 "aliases": [ 00:13:35.058 "506c9bcd-5920-41f8-bd2f-f161b37db383" 00:13:35.058 ], 00:13:35.058 "product_name": "Malloc disk", 00:13:35.058 "block_size": 512, 00:13:35.058 "num_blocks": 65536, 00:13:35.058 "uuid": "506c9bcd-5920-41f8-bd2f-f161b37db383", 00:13:35.058 "assigned_rate_limits": { 00:13:35.058 "rw_ios_per_sec": 0, 00:13:35.058 "rw_mbytes_per_sec": 0, 00:13:35.058 "r_mbytes_per_sec": 0, 00:13:35.058 "w_mbytes_per_sec": 0 00:13:35.058 }, 00:13:35.058 "claimed": false, 00:13:35.058 "zoned": false, 00:13:35.058 "supported_io_types": { 00:13:35.058 "read": true, 00:13:35.058 "write": true, 00:13:35.058 "unmap": true, 00:13:35.058 "flush": true, 00:13:35.058 "reset": true, 00:13:35.058 "nvme_admin": false, 00:13:35.058 "nvme_io": false, 00:13:35.058 "nvme_io_md": false, 00:13:35.058 "write_zeroes": true, 00:13:35.058 "zcopy": true, 00:13:35.058 "get_zone_info": false, 00:13:35.058 "zone_management": false, 00:13:35.058 "zone_append": false, 00:13:35.058 "compare": false, 00:13:35.058 "compare_and_write": false, 00:13:35.058 "abort": true, 00:13:35.058 "seek_hole": false, 00:13:35.058 "seek_data": false, 00:13:35.058 "copy": true, 00:13:35.058 "nvme_iov_md": false 00:13:35.058 }, 00:13:35.058 "memory_domains": [ 00:13:35.058 { 00:13:35.058 "dma_device_id": "system", 00:13:35.058 "dma_device_type": 1 00:13:35.058 }, 00:13:35.058 { 00:13:35.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.058 "dma_device_type": 2 00:13:35.058 } 00:13:35.058 ], 00:13:35.058 "driver_specific": {} 00:13:35.058 } 00:13:35.058 ] 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.058 BaseBdev4 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.058 [ 00:13:35.058 { 00:13:35.058 "name": "BaseBdev4", 00:13:35.058 "aliases": [ 00:13:35.058 "fc5685d9-3781-46bb-a1f3-a84ce4070538" 00:13:35.058 ], 00:13:35.058 "product_name": "Malloc disk", 00:13:35.058 "block_size": 512, 00:13:35.058 "num_blocks": 65536, 00:13:35.058 "uuid": "fc5685d9-3781-46bb-a1f3-a84ce4070538", 00:13:35.058 "assigned_rate_limits": { 00:13:35.058 "rw_ios_per_sec": 0, 00:13:35.058 "rw_mbytes_per_sec": 0, 00:13:35.058 "r_mbytes_per_sec": 0, 00:13:35.058 "w_mbytes_per_sec": 0 00:13:35.058 }, 00:13:35.058 "claimed": false, 00:13:35.058 "zoned": false, 00:13:35.058 "supported_io_types": { 00:13:35.058 "read": true, 00:13:35.058 "write": true, 00:13:35.058 "unmap": true, 00:13:35.058 "flush": true, 00:13:35.058 "reset": true, 00:13:35.058 "nvme_admin": false, 00:13:35.058 "nvme_io": false, 00:13:35.058 "nvme_io_md": false, 00:13:35.058 "write_zeroes": true, 00:13:35.058 "zcopy": true, 00:13:35.058 "get_zone_info": false, 00:13:35.058 "zone_management": false, 00:13:35.058 "zone_append": false, 00:13:35.058 "compare": false, 00:13:35.058 "compare_and_write": false, 00:13:35.058 "abort": true, 00:13:35.058 "seek_hole": false, 00:13:35.058 "seek_data": false, 00:13:35.058 "copy": true, 00:13:35.058 "nvme_iov_md": false 00:13:35.058 }, 00:13:35.058 "memory_domains": [ 00:13:35.058 { 00:13:35.058 "dma_device_id": "system", 00:13:35.058 "dma_device_type": 1 00:13:35.058 }, 00:13:35.058 { 00:13:35.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.058 "dma_device_type": 2 00:13:35.058 } 00:13:35.058 ], 00:13:35.058 "driver_specific": {} 00:13:35.058 } 00:13:35.058 ] 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:13:35.058 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:35.059 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:35.059 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:35.059 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.059 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.059 [2024-12-10 21:40:35.827458] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:35.059 [2024-12-10 21:40:35.827504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:35.059 [2024-12-10 21:40:35.827545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:35.059 [2024-12-10 21:40:35.829611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:35.059 [2024-12-10 21:40:35.829764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:35.059 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.059 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:35.059 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:35.059 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:35.059 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.059 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:35.059 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:35.059 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.059 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.059 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.059 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.319 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.319 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.319 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.319 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.319 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.319 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.319 "name": "Existed_Raid", 00:13:35.319 "uuid": "844d2c9c-0ced-40da-97ec-d02133480b89", 00:13:35.319 "strip_size_kb": 0, 00:13:35.319 "state": "configuring", 00:13:35.319 "raid_level": "raid1", 00:13:35.319 "superblock": true, 00:13:35.319 "num_base_bdevs": 4, 00:13:35.319 "num_base_bdevs_discovered": 3, 00:13:35.319 "num_base_bdevs_operational": 4, 00:13:35.319 "base_bdevs_list": [ 00:13:35.319 { 00:13:35.319 "name": "BaseBdev1", 00:13:35.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.319 "is_configured": false, 00:13:35.319 "data_offset": 0, 00:13:35.319 "data_size": 0 00:13:35.319 }, 00:13:35.319 { 00:13:35.319 "name": "BaseBdev2", 00:13:35.319 "uuid": "ea8ac440-344d-49fa-97b8-cc4edf9f3f9f", 
00:13:35.319 "is_configured": true, 00:13:35.319 "data_offset": 2048, 00:13:35.319 "data_size": 63488 00:13:35.319 }, 00:13:35.319 { 00:13:35.319 "name": "BaseBdev3", 00:13:35.319 "uuid": "506c9bcd-5920-41f8-bd2f-f161b37db383", 00:13:35.319 "is_configured": true, 00:13:35.319 "data_offset": 2048, 00:13:35.319 "data_size": 63488 00:13:35.319 }, 00:13:35.319 { 00:13:35.319 "name": "BaseBdev4", 00:13:35.319 "uuid": "fc5685d9-3781-46bb-a1f3-a84ce4070538", 00:13:35.319 "is_configured": true, 00:13:35.319 "data_offset": 2048, 00:13:35.319 "data_size": 63488 00:13:35.319 } 00:13:35.319 ] 00:13:35.319 }' 00:13:35.319 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.319 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.578 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:35.578 21:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.578 21:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.578 [2024-12-10 21:40:36.258727] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:35.578 21:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.578 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:35.578 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:35.578 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:35.578 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.578 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:35.578 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:35.578 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.578 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.578 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.578 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.578 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.578 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.578 21:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.578 21:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.578 21:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.578 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.578 "name": "Existed_Raid", 00:13:35.578 "uuid": "844d2c9c-0ced-40da-97ec-d02133480b89", 00:13:35.578 "strip_size_kb": 0, 00:13:35.578 "state": "configuring", 00:13:35.578 "raid_level": "raid1", 00:13:35.578 "superblock": true, 00:13:35.578 "num_base_bdevs": 4, 00:13:35.578 "num_base_bdevs_discovered": 2, 00:13:35.578 "num_base_bdevs_operational": 4, 00:13:35.578 "base_bdevs_list": [ 00:13:35.578 { 00:13:35.578 "name": "BaseBdev1", 00:13:35.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.578 "is_configured": false, 00:13:35.578 "data_offset": 0, 00:13:35.578 "data_size": 0 00:13:35.578 }, 00:13:35.578 { 00:13:35.578 "name": null, 00:13:35.578 "uuid": "ea8ac440-344d-49fa-97b8-cc4edf9f3f9f", 00:13:35.578 
"is_configured": false, 00:13:35.578 "data_offset": 0, 00:13:35.578 "data_size": 63488 00:13:35.578 }, 00:13:35.578 { 00:13:35.578 "name": "BaseBdev3", 00:13:35.578 "uuid": "506c9bcd-5920-41f8-bd2f-f161b37db383", 00:13:35.578 "is_configured": true, 00:13:35.578 "data_offset": 2048, 00:13:35.578 "data_size": 63488 00:13:35.578 }, 00:13:35.578 { 00:13:35.579 "name": "BaseBdev4", 00:13:35.579 "uuid": "fc5685d9-3781-46bb-a1f3-a84ce4070538", 00:13:35.579 "is_configured": true, 00:13:35.579 "data_offset": 2048, 00:13:35.579 "data_size": 63488 00:13:35.579 } 00:13:35.579 ] 00:13:35.579 }' 00:13:35.579 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.579 21:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.145 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.145 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:36.145 21:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.145 21:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.145 21:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.145 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:36.145 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:36.145 21:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.145 21:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.145 [2024-12-10 21:40:36.791721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:36.145 BaseBdev1 
00:13:36.145 21:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.145 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:36.145 21:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:36.145 21:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:36.145 21:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:36.145 21:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:36.145 21:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:36.145 21:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:36.145 21:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.145 21:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.145 21:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.145 21:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:36.145 21:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.145 21:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.145 [ 00:13:36.145 { 00:13:36.145 "name": "BaseBdev1", 00:13:36.145 "aliases": [ 00:13:36.145 "55d1a0cb-33da-4b80-b2aa-760b16c0d204" 00:13:36.145 ], 00:13:36.145 "product_name": "Malloc disk", 00:13:36.145 "block_size": 512, 00:13:36.145 "num_blocks": 65536, 00:13:36.145 "uuid": "55d1a0cb-33da-4b80-b2aa-760b16c0d204", 00:13:36.145 "assigned_rate_limits": { 00:13:36.145 
"rw_ios_per_sec": 0, 00:13:36.145 "rw_mbytes_per_sec": 0, 00:13:36.145 "r_mbytes_per_sec": 0, 00:13:36.145 "w_mbytes_per_sec": 0 00:13:36.145 }, 00:13:36.145 "claimed": true, 00:13:36.145 "claim_type": "exclusive_write", 00:13:36.145 "zoned": false, 00:13:36.145 "supported_io_types": { 00:13:36.145 "read": true, 00:13:36.145 "write": true, 00:13:36.145 "unmap": true, 00:13:36.145 "flush": true, 00:13:36.145 "reset": true, 00:13:36.145 "nvme_admin": false, 00:13:36.145 "nvme_io": false, 00:13:36.145 "nvme_io_md": false, 00:13:36.145 "write_zeroes": true, 00:13:36.145 "zcopy": true, 00:13:36.145 "get_zone_info": false, 00:13:36.145 "zone_management": false, 00:13:36.145 "zone_append": false, 00:13:36.145 "compare": false, 00:13:36.145 "compare_and_write": false, 00:13:36.145 "abort": true, 00:13:36.145 "seek_hole": false, 00:13:36.145 "seek_data": false, 00:13:36.145 "copy": true, 00:13:36.145 "nvme_iov_md": false 00:13:36.145 }, 00:13:36.145 "memory_domains": [ 00:13:36.145 { 00:13:36.145 "dma_device_id": "system", 00:13:36.145 "dma_device_type": 1 00:13:36.145 }, 00:13:36.145 { 00:13:36.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.145 "dma_device_type": 2 00:13:36.145 } 00:13:36.146 ], 00:13:36.146 "driver_specific": {} 00:13:36.146 } 00:13:36.146 ] 00:13:36.146 21:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.146 21:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:36.146 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:36.146 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:36.146 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:36.146 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:13:36.146 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:36.146 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:36.146 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.146 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.146 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.146 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.146 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.146 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.146 21:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.146 21:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.146 21:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.146 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.146 "name": "Existed_Raid", 00:13:36.146 "uuid": "844d2c9c-0ced-40da-97ec-d02133480b89", 00:13:36.146 "strip_size_kb": 0, 00:13:36.146 "state": "configuring", 00:13:36.146 "raid_level": "raid1", 00:13:36.146 "superblock": true, 00:13:36.146 "num_base_bdevs": 4, 00:13:36.146 "num_base_bdevs_discovered": 3, 00:13:36.146 "num_base_bdevs_operational": 4, 00:13:36.146 "base_bdevs_list": [ 00:13:36.146 { 00:13:36.146 "name": "BaseBdev1", 00:13:36.146 "uuid": "55d1a0cb-33da-4b80-b2aa-760b16c0d204", 00:13:36.146 "is_configured": true, 00:13:36.146 "data_offset": 2048, 00:13:36.146 "data_size": 63488 
00:13:36.146 }, 00:13:36.146 { 00:13:36.146 "name": null, 00:13:36.146 "uuid": "ea8ac440-344d-49fa-97b8-cc4edf9f3f9f", 00:13:36.146 "is_configured": false, 00:13:36.146 "data_offset": 0, 00:13:36.146 "data_size": 63488 00:13:36.146 }, 00:13:36.146 { 00:13:36.146 "name": "BaseBdev3", 00:13:36.146 "uuid": "506c9bcd-5920-41f8-bd2f-f161b37db383", 00:13:36.146 "is_configured": true, 00:13:36.146 "data_offset": 2048, 00:13:36.146 "data_size": 63488 00:13:36.146 }, 00:13:36.146 { 00:13:36.146 "name": "BaseBdev4", 00:13:36.146 "uuid": "fc5685d9-3781-46bb-a1f3-a84ce4070538", 00:13:36.146 "is_configured": true, 00:13:36.146 "data_offset": 2048, 00:13:36.146 "data_size": 63488 00:13:36.146 } 00:13:36.146 ] 00:13:36.146 }' 00:13:36.146 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.146 21:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.713 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:36.713 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.713 21:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.713 21:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.713 21:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.713 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:36.713 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:36.713 21:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.713 21:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.713 
[2024-12-10 21:40:37.318915] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:36.713 21:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.714 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:36.714 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:36.714 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:36.714 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:36.714 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:36.714 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:36.714 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.714 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.714 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.714 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.714 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.714 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.714 21:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.714 21:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.714 21:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.714 21:40:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.714 "name": "Existed_Raid", 00:13:36.714 "uuid": "844d2c9c-0ced-40da-97ec-d02133480b89", 00:13:36.714 "strip_size_kb": 0, 00:13:36.714 "state": "configuring", 00:13:36.714 "raid_level": "raid1", 00:13:36.714 "superblock": true, 00:13:36.714 "num_base_bdevs": 4, 00:13:36.714 "num_base_bdevs_discovered": 2, 00:13:36.714 "num_base_bdevs_operational": 4, 00:13:36.714 "base_bdevs_list": [ 00:13:36.714 { 00:13:36.714 "name": "BaseBdev1", 00:13:36.714 "uuid": "55d1a0cb-33da-4b80-b2aa-760b16c0d204", 00:13:36.714 "is_configured": true, 00:13:36.714 "data_offset": 2048, 00:13:36.714 "data_size": 63488 00:13:36.714 }, 00:13:36.714 { 00:13:36.714 "name": null, 00:13:36.714 "uuid": "ea8ac440-344d-49fa-97b8-cc4edf9f3f9f", 00:13:36.714 "is_configured": false, 00:13:36.714 "data_offset": 0, 00:13:36.714 "data_size": 63488 00:13:36.714 }, 00:13:36.714 { 00:13:36.714 "name": null, 00:13:36.714 "uuid": "506c9bcd-5920-41f8-bd2f-f161b37db383", 00:13:36.714 "is_configured": false, 00:13:36.714 "data_offset": 0, 00:13:36.714 "data_size": 63488 00:13:36.714 }, 00:13:36.714 { 00:13:36.714 "name": "BaseBdev4", 00:13:36.714 "uuid": "fc5685d9-3781-46bb-a1f3-a84ce4070538", 00:13:36.714 "is_configured": true, 00:13:36.714 "data_offset": 2048, 00:13:36.714 "data_size": 63488 00:13:36.714 } 00:13:36.714 ] 00:13:36.714 }' 00:13:36.714 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.714 21:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.281 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.281 21:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.281 21:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.281 21:40:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:37.281 21:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.281 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:37.281 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:37.281 21:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.281 21:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.281 [2024-12-10 21:40:37.849978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:37.281 21:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.281 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:37.281 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:37.281 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:37.281 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.281 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.281 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:37.281 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.281 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.281 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:37.281 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.281 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.281 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:37.281 21:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.281 21:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.281 21:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.281 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.281 "name": "Existed_Raid", 00:13:37.281 "uuid": "844d2c9c-0ced-40da-97ec-d02133480b89", 00:13:37.281 "strip_size_kb": 0, 00:13:37.281 "state": "configuring", 00:13:37.281 "raid_level": "raid1", 00:13:37.281 "superblock": true, 00:13:37.281 "num_base_bdevs": 4, 00:13:37.281 "num_base_bdevs_discovered": 3, 00:13:37.281 "num_base_bdevs_operational": 4, 00:13:37.281 "base_bdevs_list": [ 00:13:37.281 { 00:13:37.281 "name": "BaseBdev1", 00:13:37.281 "uuid": "55d1a0cb-33da-4b80-b2aa-760b16c0d204", 00:13:37.281 "is_configured": true, 00:13:37.281 "data_offset": 2048, 00:13:37.281 "data_size": 63488 00:13:37.281 }, 00:13:37.281 { 00:13:37.281 "name": null, 00:13:37.281 "uuid": "ea8ac440-344d-49fa-97b8-cc4edf9f3f9f", 00:13:37.281 "is_configured": false, 00:13:37.281 "data_offset": 0, 00:13:37.281 "data_size": 63488 00:13:37.281 }, 00:13:37.281 { 00:13:37.281 "name": "BaseBdev3", 00:13:37.281 "uuid": "506c9bcd-5920-41f8-bd2f-f161b37db383", 00:13:37.281 "is_configured": true, 00:13:37.281 "data_offset": 2048, 00:13:37.281 "data_size": 63488 00:13:37.281 }, 00:13:37.281 { 00:13:37.281 "name": "BaseBdev4", 00:13:37.281 "uuid": 
"fc5685d9-3781-46bb-a1f3-a84ce4070538", 00:13:37.281 "is_configured": true, 00:13:37.281 "data_offset": 2048, 00:13:37.281 "data_size": 63488 00:13:37.281 } 00:13:37.281 ] 00:13:37.281 }' 00:13:37.281 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.281 21:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.850 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.850 21:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.850 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:37.850 21:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.850 21:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.850 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:37.850 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:37.850 21:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.850 21:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.850 [2024-12-10 21:40:38.377172] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:37.850 21:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.850 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:37.850 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:37.850 21:40:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:37.850 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.850 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.850 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:37.850 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.850 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.850 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.850 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.850 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.850 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:37.850 21:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.850 21:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.850 21:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.850 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.850 "name": "Existed_Raid", 00:13:37.850 "uuid": "844d2c9c-0ced-40da-97ec-d02133480b89", 00:13:37.850 "strip_size_kb": 0, 00:13:37.850 "state": "configuring", 00:13:37.850 "raid_level": "raid1", 00:13:37.850 "superblock": true, 00:13:37.850 "num_base_bdevs": 4, 00:13:37.850 "num_base_bdevs_discovered": 2, 00:13:37.850 "num_base_bdevs_operational": 4, 00:13:37.850 "base_bdevs_list": [ 00:13:37.850 { 00:13:37.850 "name": null, 00:13:37.850 
"uuid": "55d1a0cb-33da-4b80-b2aa-760b16c0d204", 00:13:37.850 "is_configured": false, 00:13:37.850 "data_offset": 0, 00:13:37.850 "data_size": 63488 00:13:37.850 }, 00:13:37.850 { 00:13:37.850 "name": null, 00:13:37.850 "uuid": "ea8ac440-344d-49fa-97b8-cc4edf9f3f9f", 00:13:37.850 "is_configured": false, 00:13:37.850 "data_offset": 0, 00:13:37.850 "data_size": 63488 00:13:37.850 }, 00:13:37.850 { 00:13:37.850 "name": "BaseBdev3", 00:13:37.850 "uuid": "506c9bcd-5920-41f8-bd2f-f161b37db383", 00:13:37.850 "is_configured": true, 00:13:37.850 "data_offset": 2048, 00:13:37.850 "data_size": 63488 00:13:37.850 }, 00:13:37.850 { 00:13:37.850 "name": "BaseBdev4", 00:13:37.850 "uuid": "fc5685d9-3781-46bb-a1f3-a84ce4070538", 00:13:37.850 "is_configured": true, 00:13:37.850 "data_offset": 2048, 00:13:37.850 "data_size": 63488 00:13:37.850 } 00:13:37.850 ] 00:13:37.850 }' 00:13:37.850 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.850 21:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.418 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.418 21:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.418 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:38.418 21:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.418 21:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.418 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:38.418 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:38.418 21:40:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.418 21:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.418 [2024-12-10 21:40:39.006736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:38.418 21:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.418 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:38.418 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:38.418 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:38.418 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.418 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.418 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:38.418 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.418 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.418 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.418 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.418 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.418 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:38.418 21:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.418 21:40:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.418 21:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.418 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.418 "name": "Existed_Raid", 00:13:38.418 "uuid": "844d2c9c-0ced-40da-97ec-d02133480b89", 00:13:38.418 "strip_size_kb": 0, 00:13:38.418 "state": "configuring", 00:13:38.418 "raid_level": "raid1", 00:13:38.418 "superblock": true, 00:13:38.418 "num_base_bdevs": 4, 00:13:38.418 "num_base_bdevs_discovered": 3, 00:13:38.418 "num_base_bdevs_operational": 4, 00:13:38.418 "base_bdevs_list": [ 00:13:38.418 { 00:13:38.418 "name": null, 00:13:38.418 "uuid": "55d1a0cb-33da-4b80-b2aa-760b16c0d204", 00:13:38.418 "is_configured": false, 00:13:38.418 "data_offset": 0, 00:13:38.418 "data_size": 63488 00:13:38.418 }, 00:13:38.418 { 00:13:38.418 "name": "BaseBdev2", 00:13:38.418 "uuid": "ea8ac440-344d-49fa-97b8-cc4edf9f3f9f", 00:13:38.418 "is_configured": true, 00:13:38.418 "data_offset": 2048, 00:13:38.418 "data_size": 63488 00:13:38.418 }, 00:13:38.418 { 00:13:38.418 "name": "BaseBdev3", 00:13:38.418 "uuid": "506c9bcd-5920-41f8-bd2f-f161b37db383", 00:13:38.418 "is_configured": true, 00:13:38.418 "data_offset": 2048, 00:13:38.418 "data_size": 63488 00:13:38.418 }, 00:13:38.418 { 00:13:38.418 "name": "BaseBdev4", 00:13:38.418 "uuid": "fc5685d9-3781-46bb-a1f3-a84ce4070538", 00:13:38.418 "is_configured": true, 00:13:38.418 "data_offset": 2048, 00:13:38.418 "data_size": 63488 00:13:38.418 } 00:13:38.418 ] 00:13:38.418 }' 00:13:38.418 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.418 21:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.677 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.677 21:40:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.677 21:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.677 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:38.677 21:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.936 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:38.936 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.936 21:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.936 21:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.936 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:38.936 21:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.936 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 55d1a0cb-33da-4b80-b2aa-760b16c0d204 00:13:38.937 21:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.937 21:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.937 [2024-12-10 21:40:39.556668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:38.937 [2024-12-10 21:40:39.557003] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:38.937 [2024-12-10 21:40:39.557059] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:38.937 [2024-12-10 21:40:39.557361] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:13:38.937 [2024-12-10 21:40:39.557577] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:38.937 NewBaseBdev 00:13:38.937 [2024-12-10 21:40:39.557622] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:38.937 [2024-12-10 21:40:39.557778] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.937 21:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.937 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:38.937 21:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:38.937 21:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:38.937 21:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:38.937 21:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:38.937 21:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:38.937 21:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:38.937 21:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.937 21:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.937 21:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.937 21:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:38.937 21:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.937 21:40:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.937 [ 00:13:38.937 { 00:13:38.937 "name": "NewBaseBdev", 00:13:38.937 "aliases": [ 00:13:38.937 "55d1a0cb-33da-4b80-b2aa-760b16c0d204" 00:13:38.937 ], 00:13:38.937 "product_name": "Malloc disk", 00:13:38.937 "block_size": 512, 00:13:38.937 "num_blocks": 65536, 00:13:38.937 "uuid": "55d1a0cb-33da-4b80-b2aa-760b16c0d204", 00:13:38.937 "assigned_rate_limits": { 00:13:38.937 "rw_ios_per_sec": 0, 00:13:38.937 "rw_mbytes_per_sec": 0, 00:13:38.937 "r_mbytes_per_sec": 0, 00:13:38.937 "w_mbytes_per_sec": 0 00:13:38.937 }, 00:13:38.937 "claimed": true, 00:13:38.937 "claim_type": "exclusive_write", 00:13:38.937 "zoned": false, 00:13:38.937 "supported_io_types": { 00:13:38.937 "read": true, 00:13:38.937 "write": true, 00:13:38.937 "unmap": true, 00:13:38.937 "flush": true, 00:13:38.937 "reset": true, 00:13:38.937 "nvme_admin": false, 00:13:38.937 "nvme_io": false, 00:13:38.937 "nvme_io_md": false, 00:13:38.937 "write_zeroes": true, 00:13:38.937 "zcopy": true, 00:13:38.937 "get_zone_info": false, 00:13:38.937 "zone_management": false, 00:13:38.937 "zone_append": false, 00:13:38.937 "compare": false, 00:13:38.937 "compare_and_write": false, 00:13:38.937 "abort": true, 00:13:38.937 "seek_hole": false, 00:13:38.937 "seek_data": false, 00:13:38.937 "copy": true, 00:13:38.937 "nvme_iov_md": false 00:13:38.937 }, 00:13:38.937 "memory_domains": [ 00:13:38.937 { 00:13:38.937 "dma_device_id": "system", 00:13:38.937 "dma_device_type": 1 00:13:38.937 }, 00:13:38.937 { 00:13:38.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.937 "dma_device_type": 2 00:13:38.937 } 00:13:38.937 ], 00:13:38.937 "driver_specific": {} 00:13:38.937 } 00:13:38.937 ] 00:13:38.937 21:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.937 21:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:38.937 21:40:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:38.937 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:38.937 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.937 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.937 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.937 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:38.937 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.937 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.937 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.937 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.937 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.937 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:38.937 21:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.937 21:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.937 21:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.937 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.937 "name": "Existed_Raid", 00:13:38.937 "uuid": "844d2c9c-0ced-40da-97ec-d02133480b89", 00:13:38.937 "strip_size_kb": 0, 00:13:38.937 
"state": "online", 00:13:38.937 "raid_level": "raid1", 00:13:38.937 "superblock": true, 00:13:38.937 "num_base_bdevs": 4, 00:13:38.937 "num_base_bdevs_discovered": 4, 00:13:38.937 "num_base_bdevs_operational": 4, 00:13:38.937 "base_bdevs_list": [ 00:13:38.937 { 00:13:38.937 "name": "NewBaseBdev", 00:13:38.937 "uuid": "55d1a0cb-33da-4b80-b2aa-760b16c0d204", 00:13:38.937 "is_configured": true, 00:13:38.937 "data_offset": 2048, 00:13:38.937 "data_size": 63488 00:13:38.937 }, 00:13:38.937 { 00:13:38.937 "name": "BaseBdev2", 00:13:38.937 "uuid": "ea8ac440-344d-49fa-97b8-cc4edf9f3f9f", 00:13:38.937 "is_configured": true, 00:13:38.937 "data_offset": 2048, 00:13:38.937 "data_size": 63488 00:13:38.937 }, 00:13:38.937 { 00:13:38.937 "name": "BaseBdev3", 00:13:38.937 "uuid": "506c9bcd-5920-41f8-bd2f-f161b37db383", 00:13:38.937 "is_configured": true, 00:13:38.937 "data_offset": 2048, 00:13:38.937 "data_size": 63488 00:13:38.937 }, 00:13:38.937 { 00:13:38.937 "name": "BaseBdev4", 00:13:38.937 "uuid": "fc5685d9-3781-46bb-a1f3-a84ce4070538", 00:13:38.937 "is_configured": true, 00:13:38.937 "data_offset": 2048, 00:13:38.937 "data_size": 63488 00:13:38.937 } 00:13:38.937 ] 00:13:38.937 }' 00:13:38.937 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.937 21:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.505 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:39.505 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:39.505 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:39.505 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:39.505 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:39.505 
21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:39.505 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:39.505 21:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.505 21:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.505 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:39.505 [2024-12-10 21:40:40.064190] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:39.505 21:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.505 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:39.505 "name": "Existed_Raid", 00:13:39.505 "aliases": [ 00:13:39.505 "844d2c9c-0ced-40da-97ec-d02133480b89" 00:13:39.506 ], 00:13:39.506 "product_name": "Raid Volume", 00:13:39.506 "block_size": 512, 00:13:39.506 "num_blocks": 63488, 00:13:39.506 "uuid": "844d2c9c-0ced-40da-97ec-d02133480b89", 00:13:39.506 "assigned_rate_limits": { 00:13:39.506 "rw_ios_per_sec": 0, 00:13:39.506 "rw_mbytes_per_sec": 0, 00:13:39.506 "r_mbytes_per_sec": 0, 00:13:39.506 "w_mbytes_per_sec": 0 00:13:39.506 }, 00:13:39.506 "claimed": false, 00:13:39.506 "zoned": false, 00:13:39.506 "supported_io_types": { 00:13:39.506 "read": true, 00:13:39.506 "write": true, 00:13:39.506 "unmap": false, 00:13:39.506 "flush": false, 00:13:39.506 "reset": true, 00:13:39.506 "nvme_admin": false, 00:13:39.506 "nvme_io": false, 00:13:39.506 "nvme_io_md": false, 00:13:39.506 "write_zeroes": true, 00:13:39.506 "zcopy": false, 00:13:39.506 "get_zone_info": false, 00:13:39.506 "zone_management": false, 00:13:39.506 "zone_append": false, 00:13:39.506 "compare": false, 00:13:39.506 "compare_and_write": false, 00:13:39.506 
"abort": false, 00:13:39.506 "seek_hole": false, 00:13:39.506 "seek_data": false, 00:13:39.506 "copy": false, 00:13:39.506 "nvme_iov_md": false 00:13:39.506 }, 00:13:39.506 "memory_domains": [ 00:13:39.506 { 00:13:39.506 "dma_device_id": "system", 00:13:39.506 "dma_device_type": 1 00:13:39.506 }, 00:13:39.506 { 00:13:39.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.506 "dma_device_type": 2 00:13:39.506 }, 00:13:39.506 { 00:13:39.506 "dma_device_id": "system", 00:13:39.506 "dma_device_type": 1 00:13:39.506 }, 00:13:39.506 { 00:13:39.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.506 "dma_device_type": 2 00:13:39.506 }, 00:13:39.506 { 00:13:39.506 "dma_device_id": "system", 00:13:39.506 "dma_device_type": 1 00:13:39.506 }, 00:13:39.506 { 00:13:39.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.506 "dma_device_type": 2 00:13:39.506 }, 00:13:39.506 { 00:13:39.506 "dma_device_id": "system", 00:13:39.506 "dma_device_type": 1 00:13:39.506 }, 00:13:39.506 { 00:13:39.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.506 "dma_device_type": 2 00:13:39.506 } 00:13:39.506 ], 00:13:39.506 "driver_specific": { 00:13:39.506 "raid": { 00:13:39.506 "uuid": "844d2c9c-0ced-40da-97ec-d02133480b89", 00:13:39.506 "strip_size_kb": 0, 00:13:39.506 "state": "online", 00:13:39.506 "raid_level": "raid1", 00:13:39.506 "superblock": true, 00:13:39.506 "num_base_bdevs": 4, 00:13:39.506 "num_base_bdevs_discovered": 4, 00:13:39.506 "num_base_bdevs_operational": 4, 00:13:39.506 "base_bdevs_list": [ 00:13:39.506 { 00:13:39.506 "name": "NewBaseBdev", 00:13:39.506 "uuid": "55d1a0cb-33da-4b80-b2aa-760b16c0d204", 00:13:39.506 "is_configured": true, 00:13:39.506 "data_offset": 2048, 00:13:39.506 "data_size": 63488 00:13:39.506 }, 00:13:39.506 { 00:13:39.506 "name": "BaseBdev2", 00:13:39.506 "uuid": "ea8ac440-344d-49fa-97b8-cc4edf9f3f9f", 00:13:39.506 "is_configured": true, 00:13:39.506 "data_offset": 2048, 00:13:39.506 "data_size": 63488 00:13:39.506 }, 00:13:39.506 { 
00:13:39.506 "name": "BaseBdev3", 00:13:39.506 "uuid": "506c9bcd-5920-41f8-bd2f-f161b37db383", 00:13:39.506 "is_configured": true, 00:13:39.506 "data_offset": 2048, 00:13:39.506 "data_size": 63488 00:13:39.506 }, 00:13:39.506 { 00:13:39.506 "name": "BaseBdev4", 00:13:39.506 "uuid": "fc5685d9-3781-46bb-a1f3-a84ce4070538", 00:13:39.506 "is_configured": true, 00:13:39.506 "data_offset": 2048, 00:13:39.506 "data_size": 63488 00:13:39.506 } 00:13:39.506 ] 00:13:39.506 } 00:13:39.506 } 00:13:39.506 }' 00:13:39.506 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:39.506 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:39.506 BaseBdev2 00:13:39.506 BaseBdev3 00:13:39.506 BaseBdev4' 00:13:39.506 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:39.506 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:39.506 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:39.506 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:39.506 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:39.506 21:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.506 21:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.506 21:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.506 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:13:39.506 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:39.506 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:39.506 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:39.506 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:39.506 21:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.506 21:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.506 21:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.764 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:39.764 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:39.764 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:39.764 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:39.764 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:39.764 21:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.764 21:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.764 21:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.764 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:39.764 21:40:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:39.764 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:39.764 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:39.764 21:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.764 21:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.764 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:39.764 21:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.765 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:39.765 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:39.765 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:39.765 21:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.765 21:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.765 [2024-12-10 21:40:40.403317] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:39.765 [2024-12-10 21:40:40.403349] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:39.765 [2024-12-10 21:40:40.403450] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:39.765 [2024-12-10 21:40:40.403782] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:39.765 [2024-12-10 21:40:40.403797] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:13:39.765 21:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.765 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74010 00:13:39.765 21:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74010 ']' 00:13:39.765 21:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74010 00:13:39.765 21:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:39.765 21:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:39.765 21:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74010 00:13:39.765 21:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:39.765 21:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:39.765 21:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74010' 00:13:39.765 killing process with pid 74010 00:13:39.765 21:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74010 00:13:39.765 [2024-12-10 21:40:40.451204] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:39.765 21:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74010 00:13:40.332 [2024-12-10 21:40:40.853411] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:41.269 21:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:41.269 00:13:41.269 real 0m11.859s 00:13:41.269 user 0m18.887s 00:13:41.269 sys 0m2.135s 00:13:41.269 21:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:13:41.269 ************************************ 00:13:41.269 END TEST raid_state_function_test_sb 00:13:41.269 ************************************ 00:13:41.269 21:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.269 21:40:42 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:13:41.269 21:40:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:41.269 21:40:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:41.269 21:40:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:41.269 ************************************ 00:13:41.269 START TEST raid_superblock_test 00:13:41.269 ************************************ 00:13:41.269 21:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:13:41.527 21:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:13:41.527 21:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:13:41.527 21:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:41.527 21:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:41.527 21:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:41.527 21:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:41.527 21:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:41.527 21:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:41.527 21:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:41.527 21:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:41.527 21:40:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:41.527 21:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:41.527 21:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:41.527 21:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:13:41.527 21:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:13:41.527 21:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74680 00:13:41.527 21:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:41.527 21:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74680 00:13:41.527 21:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74680 ']' 00:13:41.527 21:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.527 21:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:41.527 21:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.527 21:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:41.527 21:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.527 [2024-12-10 21:40:42.139418] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:13:41.527 [2024-12-10 21:40:42.139642] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74680 ] 00:13:41.786 [2024-12-10 21:40:42.313568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.786 [2024-12-10 21:40:42.431083] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.044 [2024-12-10 21:40:42.632951] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:42.044 [2024-12-10 21:40:42.633045] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:42.303 21:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:42.303 21:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:42.303 21:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:42.303 21:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:42.303 21:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:42.303 21:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:42.304 21:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:42.304 21:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:42.304 21:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:42.304 21:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:42.304 21:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:42.304 
21:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.304 21:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.304 malloc1 00:13:42.304 21:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.304 21:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:42.304 21:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.304 21:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.304 [2024-12-10 21:40:43.005137] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:42.304 [2024-12-10 21:40:43.005269] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.304 [2024-12-10 21:40:43.005309] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:42.304 [2024-12-10 21:40:43.005337] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.304 [2024-12-10 21:40:43.007429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.304 [2024-12-10 21:40:43.007480] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:42.304 pt1 00:13:42.304 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.304 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:42.304 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:42.304 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:42.304 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:42.304 21:40:43 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:42.304 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:42.304 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:42.304 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:42.304 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:42.304 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.304 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.304 malloc2 00:13:42.304 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.304 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:42.304 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.304 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.304 [2024-12-10 21:40:43.059780] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:42.304 [2024-12-10 21:40:43.059896] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.304 [2024-12-10 21:40:43.059939] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:42.304 [2024-12-10 21:40:43.059968] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.304 [2024-12-10 21:40:43.062303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.304 [2024-12-10 21:40:43.062375] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:42.304 
pt2 00:13:42.304 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.304 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:42.304 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:42.304 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:42.304 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:42.304 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:42.304 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:42.304 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:42.304 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:42.304 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:42.304 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.304 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.563 malloc3 00:13:42.563 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.563 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:42.563 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.563 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.563 [2024-12-10 21:40:43.132697] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:42.563 [2024-12-10 21:40:43.132806] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.563 [2024-12-10 21:40:43.132846] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:42.563 [2024-12-10 21:40:43.132876] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.563 [2024-12-10 21:40:43.134947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.563 [2024-12-10 21:40:43.135020] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:42.563 pt3 00:13:42.563 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.563 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:42.563 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:42.563 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:13:42.563 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:13:42.563 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:42.563 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:42.563 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:42.563 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:42.563 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:13:42.563 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.563 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.563 malloc4 00:13:42.563 21:40:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.563 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:42.563 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.563 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.563 [2024-12-10 21:40:43.195271] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:42.563 [2024-12-10 21:40:43.195398] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.563 [2024-12-10 21:40:43.195453] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:42.563 [2024-12-10 21:40:43.195485] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.563 [2024-12-10 21:40:43.197821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.563 [2024-12-10 21:40:43.197909] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:42.563 pt4 00:13:42.563 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.563 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:42.563 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:42.563 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:13:42.563 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.563 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.563 [2024-12-10 21:40:43.207274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:42.563 [2024-12-10 21:40:43.209276] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:42.563 [2024-12-10 21:40:43.209341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:42.563 [2024-12-10 21:40:43.209405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:42.563 [2024-12-10 21:40:43.209618] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:42.563 [2024-12-10 21:40:43.209636] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:42.563 [2024-12-10 21:40:43.209895] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:42.563 [2024-12-10 21:40:43.210068] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:42.563 [2024-12-10 21:40:43.210083] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:42.563 [2024-12-10 21:40:43.210257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.563 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.563 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:42.564 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.564 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.564 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.564 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.564 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:42.564 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.564 
21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.564 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.564 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.564 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.564 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.564 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.564 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.564 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.564 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.564 "name": "raid_bdev1", 00:13:42.564 "uuid": "474b431a-7d59-4675-be5c-bf979dc98a0c", 00:13:42.564 "strip_size_kb": 0, 00:13:42.564 "state": "online", 00:13:42.564 "raid_level": "raid1", 00:13:42.564 "superblock": true, 00:13:42.564 "num_base_bdevs": 4, 00:13:42.564 "num_base_bdevs_discovered": 4, 00:13:42.564 "num_base_bdevs_operational": 4, 00:13:42.564 "base_bdevs_list": [ 00:13:42.564 { 00:13:42.564 "name": "pt1", 00:13:42.564 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:42.564 "is_configured": true, 00:13:42.564 "data_offset": 2048, 00:13:42.564 "data_size": 63488 00:13:42.564 }, 00:13:42.564 { 00:13:42.564 "name": "pt2", 00:13:42.564 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:42.564 "is_configured": true, 00:13:42.564 "data_offset": 2048, 00:13:42.564 "data_size": 63488 00:13:42.564 }, 00:13:42.564 { 00:13:42.564 "name": "pt3", 00:13:42.564 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:42.564 "is_configured": true, 00:13:42.564 "data_offset": 2048, 00:13:42.564 "data_size": 63488 
00:13:42.564 }, 00:13:42.564 { 00:13:42.564 "name": "pt4", 00:13:42.564 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:42.564 "is_configured": true, 00:13:42.564 "data_offset": 2048, 00:13:42.564 "data_size": 63488 00:13:42.564 } 00:13:42.564 ] 00:13:42.564 }' 00:13:42.564 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.564 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.132 [2024-12-10 21:40:43.634925] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:43.132 "name": "raid_bdev1", 00:13:43.132 "aliases": [ 00:13:43.132 "474b431a-7d59-4675-be5c-bf979dc98a0c" 00:13:43.132 ], 
00:13:43.132 "product_name": "Raid Volume", 00:13:43.132 "block_size": 512, 00:13:43.132 "num_blocks": 63488, 00:13:43.132 "uuid": "474b431a-7d59-4675-be5c-bf979dc98a0c", 00:13:43.132 "assigned_rate_limits": { 00:13:43.132 "rw_ios_per_sec": 0, 00:13:43.132 "rw_mbytes_per_sec": 0, 00:13:43.132 "r_mbytes_per_sec": 0, 00:13:43.132 "w_mbytes_per_sec": 0 00:13:43.132 }, 00:13:43.132 "claimed": false, 00:13:43.132 "zoned": false, 00:13:43.132 "supported_io_types": { 00:13:43.132 "read": true, 00:13:43.132 "write": true, 00:13:43.132 "unmap": false, 00:13:43.132 "flush": false, 00:13:43.132 "reset": true, 00:13:43.132 "nvme_admin": false, 00:13:43.132 "nvme_io": false, 00:13:43.132 "nvme_io_md": false, 00:13:43.132 "write_zeroes": true, 00:13:43.132 "zcopy": false, 00:13:43.132 "get_zone_info": false, 00:13:43.132 "zone_management": false, 00:13:43.132 "zone_append": false, 00:13:43.132 "compare": false, 00:13:43.132 "compare_and_write": false, 00:13:43.132 "abort": false, 00:13:43.132 "seek_hole": false, 00:13:43.132 "seek_data": false, 00:13:43.132 "copy": false, 00:13:43.132 "nvme_iov_md": false 00:13:43.132 }, 00:13:43.132 "memory_domains": [ 00:13:43.132 { 00:13:43.132 "dma_device_id": "system", 00:13:43.132 "dma_device_type": 1 00:13:43.132 }, 00:13:43.132 { 00:13:43.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.132 "dma_device_type": 2 00:13:43.132 }, 00:13:43.132 { 00:13:43.132 "dma_device_id": "system", 00:13:43.132 "dma_device_type": 1 00:13:43.132 }, 00:13:43.132 { 00:13:43.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.132 "dma_device_type": 2 00:13:43.132 }, 00:13:43.132 { 00:13:43.132 "dma_device_id": "system", 00:13:43.132 "dma_device_type": 1 00:13:43.132 }, 00:13:43.132 { 00:13:43.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.132 "dma_device_type": 2 00:13:43.132 }, 00:13:43.132 { 00:13:43.132 "dma_device_id": "system", 00:13:43.132 "dma_device_type": 1 00:13:43.132 }, 00:13:43.132 { 00:13:43.132 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:43.132 "dma_device_type": 2 00:13:43.132 } 00:13:43.132 ], 00:13:43.132 "driver_specific": { 00:13:43.132 "raid": { 00:13:43.132 "uuid": "474b431a-7d59-4675-be5c-bf979dc98a0c", 00:13:43.132 "strip_size_kb": 0, 00:13:43.132 "state": "online", 00:13:43.132 "raid_level": "raid1", 00:13:43.132 "superblock": true, 00:13:43.132 "num_base_bdevs": 4, 00:13:43.132 "num_base_bdevs_discovered": 4, 00:13:43.132 "num_base_bdevs_operational": 4, 00:13:43.132 "base_bdevs_list": [ 00:13:43.132 { 00:13:43.132 "name": "pt1", 00:13:43.132 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:43.132 "is_configured": true, 00:13:43.132 "data_offset": 2048, 00:13:43.132 "data_size": 63488 00:13:43.132 }, 00:13:43.132 { 00:13:43.132 "name": "pt2", 00:13:43.132 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:43.132 "is_configured": true, 00:13:43.132 "data_offset": 2048, 00:13:43.132 "data_size": 63488 00:13:43.132 }, 00:13:43.132 { 00:13:43.132 "name": "pt3", 00:13:43.132 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:43.132 "is_configured": true, 00:13:43.132 "data_offset": 2048, 00:13:43.132 "data_size": 63488 00:13:43.132 }, 00:13:43.132 { 00:13:43.132 "name": "pt4", 00:13:43.132 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:43.132 "is_configured": true, 00:13:43.132 "data_offset": 2048, 00:13:43.132 "data_size": 63488 00:13:43.132 } 00:13:43.132 ] 00:13:43.132 } 00:13:43.132 } 00:13:43.132 }' 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:43.132 pt2 00:13:43.132 pt3 00:13:43.132 pt4' 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- 
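The two jq filters above pull the configured base bdev names and a block-size/metadata fingerprint out of the `bdev_get_bdevs` JSON. Purely as an illustrative sketch (not part of the test suite; the JSON below is a hand-abbreviated copy of the `raid_bdev1` dump in this log, with the metadata fields assumed null), the same two extractions in Python also show why `cmp_raid_bdev` ends up as `512` followed by three spaces, which is what the later `[[ 512 == \5\1\2\ \ \  ]]` comparisons are matching:

```python
import json

# Abbreviated copy of the raid_bdev1 JSON dumped by bdev_get_bdevs above
# (metadata fields assumed null for illustration).
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "block_size": 512,
  "md_size": null,
  "md_interleave": null,
  "dif_type": null,
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        {"name": "pt1", "is_configured": true},
        {"name": "pt2", "is_configured": true},
        {"name": "pt3", "is_configured": true},
        {"name": "pt4", "is_configured": true}
      ]
    }
  }
}
""")

# jq -r '.driver_specific.raid.base_bdevs_list[]
#        | select(.is_configured == true).name'
base_bdev_names = [
    b["name"]
    for b in raid_bdev_info["driver_specific"]["raid"]["base_bdevs_list"]
    if b["is_configured"]
]

# jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
# jq's join() renders null elements as empty strings, so the result is
# "512" plus three trailing spaces.
fields = [raid_bdev_info[k]
          for k in ("block_size", "md_size", "md_interleave", "dif_type")]
cmp_raid_bdev = " ".join("" if v is None else str(v) for v in fields)

print(base_bdev_names)      # ['pt1', 'pt2', 'pt3', 'pt4']
print(repr(cmp_raid_bdev))  # '512   '
```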
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.132 21:40:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.132 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.133 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.133 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:43.133 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.133 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.392 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.392 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.392 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.392 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:43.392 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:43.392 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:43.392 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.392 [2024-12-10 21:40:43.958370] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:43.392 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.392 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=474b431a-7d59-4675-be5c-bf979dc98a0c 00:13:43.392 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 474b431a-7d59-4675-be5c-bf979dc98a0c ']' 00:13:43.392 21:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:43.392 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.392 21:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.392 [2024-12-10 21:40:44.001913] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:43.392 [2024-12-10 21:40:44.001947] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:43.392 [2024-12-10 21:40:44.002038] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:43.392 [2024-12-10 21:40:44.002143] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:43.392 [2024-12-10 21:40:44.002160] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:43.392 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.392 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.392 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.392 21:40:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:43.392 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:43.392 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.392 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:43.392 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:43.392 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:43.392 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:43.392 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.392 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.392 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.392 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:43.392 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:43.392 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.392 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.392 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.392 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:43.392 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:43.392 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.392 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.392 21:40:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.392 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:43.392 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:13:43.392 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.392 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.392 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.392 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:43.392 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.392 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.392 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:43.392 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.392 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:43.392 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:43.392 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:43.392 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:43.393 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:43.393 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:43.393 21:40:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:43.393 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:43.393 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:43.393 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.393 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.393 [2024-12-10 21:40:44.169642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:43.393 [2024-12-10 21:40:44.171634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:43.393 [2024-12-10 21:40:44.171776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:43.393 [2024-12-10 21:40:44.171823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:43.393 [2024-12-10 21:40:44.171884] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:43.393 [2024-12-10 21:40:44.171943] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:43.393 [2024-12-10 21:40:44.171964] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:43.393 [2024-12-10 21:40:44.171985] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:43.393 [2024-12-10 21:40:44.171998] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:43.393 [2024-12-10 21:40:44.172010] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:13:43.651 request: 00:13:43.651 { 00:13:43.651 "name": "raid_bdev1", 00:13:43.651 "raid_level": "raid1", 00:13:43.651 "base_bdevs": [ 00:13:43.651 "malloc1", 00:13:43.651 "malloc2", 00:13:43.651 "malloc3", 00:13:43.651 "malloc4" 00:13:43.651 ], 00:13:43.651 "superblock": false, 00:13:43.651 "method": "bdev_raid_create", 00:13:43.651 "req_id": 1 00:13:43.651 } 00:13:43.651 Got JSON-RPC error response 00:13:43.651 response: 00:13:43.651 { 00:13:43.651 "code": -17, 00:13:43.651 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:43.651 } 00:13:43.651 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:43.651 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:43.651 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:43.651 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:43.651 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:43.651 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.651 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:43.651 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.651 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.651 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.651 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:43.651 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:43.651 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:43.651 
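The `NOT` wrapper above passes precisely because `bdev_raid_create` is expected to fail here: the malloc bdevs still carry superblocks from the deleted `raid_bdev1`, so the create is rejected with JSON-RPC error `-17` (`-EEXIST`). As a minimal sketch only (not part of the test suite), checking that response shape in Python, using the error object copied from the log:

```python
import json

# JSON-RPC error body returned by bdev_raid_create, copied from the log above.
response = json.loads("""
{
  "code": -17,
  "message": "Failed to create RAID bdev raid_bdev1: File exists"
}
""")

# -17 is -EEXIST: the base bdevs still hold a superblock of a different
# raid bdev, as the *ERROR* lines above report for malloc1..malloc4.
assert response["code"] == -17
assert response["message"].endswith("File exists")
print("bdev_raid_create failed as expected")
```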
21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.652 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.652 [2024-12-10 21:40:44.229519] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:43.652 [2024-12-10 21:40:44.229651] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.652 [2024-12-10 21:40:44.229703] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:43.652 [2024-12-10 21:40:44.229741] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.652 [2024-12-10 21:40:44.232091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.652 [2024-12-10 21:40:44.232199] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:43.652 [2024-12-10 21:40:44.232343] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:43.652 [2024-12-10 21:40:44.232459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:43.652 pt1 00:13:43.652 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.652 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:13:43.652 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.652 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:43.652 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.652 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.652 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:43.652 21:40:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.652 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.652 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.652 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.652 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.652 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.652 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.652 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.652 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.652 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.652 "name": "raid_bdev1", 00:13:43.652 "uuid": "474b431a-7d59-4675-be5c-bf979dc98a0c", 00:13:43.652 "strip_size_kb": 0, 00:13:43.652 "state": "configuring", 00:13:43.652 "raid_level": "raid1", 00:13:43.652 "superblock": true, 00:13:43.652 "num_base_bdevs": 4, 00:13:43.652 "num_base_bdevs_discovered": 1, 00:13:43.652 "num_base_bdevs_operational": 4, 00:13:43.652 "base_bdevs_list": [ 00:13:43.652 { 00:13:43.652 "name": "pt1", 00:13:43.652 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:43.652 "is_configured": true, 00:13:43.652 "data_offset": 2048, 00:13:43.652 "data_size": 63488 00:13:43.652 }, 00:13:43.652 { 00:13:43.652 "name": null, 00:13:43.652 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:43.652 "is_configured": false, 00:13:43.652 "data_offset": 2048, 00:13:43.652 "data_size": 63488 00:13:43.652 }, 00:13:43.652 { 00:13:43.652 "name": null, 00:13:43.652 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:43.652 
"is_configured": false, 00:13:43.652 "data_offset": 2048, 00:13:43.652 "data_size": 63488 00:13:43.652 }, 00:13:43.652 { 00:13:43.652 "name": null, 00:13:43.652 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:43.652 "is_configured": false, 00:13:43.652 "data_offset": 2048, 00:13:43.652 "data_size": 63488 00:13:43.652 } 00:13:43.652 ] 00:13:43.652 }' 00:13:43.652 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.652 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.917 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:13:43.917 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:43.917 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.917 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.917 [2024-12-10 21:40:44.688739] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:43.917 [2024-12-10 21:40:44.688888] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.917 [2024-12-10 21:40:44.688915] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:43.917 [2024-12-10 21:40:44.688926] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.917 [2024-12-10 21:40:44.689423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.917 [2024-12-10 21:40:44.689460] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:43.917 [2024-12-10 21:40:44.689552] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:43.917 [2024-12-10 21:40:44.689579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:13:44.184 pt2 00:13:44.184 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.184 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:44.184 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.184 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.184 [2024-12-10 21:40:44.700705] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:44.184 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.184 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:13:44.184 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:44.184 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:44.184 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:44.184 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:44.184 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:44.184 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.184 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.184 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.184 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.184 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.184 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:13:44.184 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.184 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.184 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.184 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.184 "name": "raid_bdev1", 00:13:44.184 "uuid": "474b431a-7d59-4675-be5c-bf979dc98a0c", 00:13:44.184 "strip_size_kb": 0, 00:13:44.184 "state": "configuring", 00:13:44.184 "raid_level": "raid1", 00:13:44.184 "superblock": true, 00:13:44.184 "num_base_bdevs": 4, 00:13:44.184 "num_base_bdevs_discovered": 1, 00:13:44.184 "num_base_bdevs_operational": 4, 00:13:44.184 "base_bdevs_list": [ 00:13:44.184 { 00:13:44.184 "name": "pt1", 00:13:44.184 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:44.184 "is_configured": true, 00:13:44.184 "data_offset": 2048, 00:13:44.184 "data_size": 63488 00:13:44.184 }, 00:13:44.184 { 00:13:44.184 "name": null, 00:13:44.184 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:44.184 "is_configured": false, 00:13:44.184 "data_offset": 0, 00:13:44.184 "data_size": 63488 00:13:44.184 }, 00:13:44.184 { 00:13:44.184 "name": null, 00:13:44.184 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:44.184 "is_configured": false, 00:13:44.184 "data_offset": 2048, 00:13:44.184 "data_size": 63488 00:13:44.184 }, 00:13:44.184 { 00:13:44.184 "name": null, 00:13:44.184 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:44.184 "is_configured": false, 00:13:44.184 "data_offset": 2048, 00:13:44.184 "data_size": 63488 00:13:44.184 } 00:13:44.184 ] 00:13:44.184 }' 00:13:44.184 21:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.184 21:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.446 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:13:44.447 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:44.447 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:44.447 21:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.447 21:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.447 [2024-12-10 21:40:45.163937] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:44.447 [2024-12-10 21:40:45.164064] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.447 [2024-12-10 21:40:45.164103] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:44.447 [2024-12-10 21:40:45.164132] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.447 [2024-12-10 21:40:45.164614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.447 [2024-12-10 21:40:45.164674] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:44.447 [2024-12-10 21:40:45.164790] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:44.447 [2024-12-10 21:40:45.164839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:44.447 pt2 00:13:44.447 21:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.447 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:44.447 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:44.447 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:44.447 21:40:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.447 21:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.447 [2024-12-10 21:40:45.175883] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:44.447 [2024-12-10 21:40:45.175987] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.447 [2024-12-10 21:40:45.176021] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:44.447 [2024-12-10 21:40:45.176047] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.447 [2024-12-10 21:40:45.176468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.447 [2024-12-10 21:40:45.176522] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:44.447 [2024-12-10 21:40:45.176616] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:44.447 [2024-12-10 21:40:45.176661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:44.447 pt3 00:13:44.447 21:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.447 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:44.447 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:44.447 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:44.447 21:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.447 21:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.447 [2024-12-10 21:40:45.187841] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:44.447 [2024-12-10 
21:40:45.187883] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.447 [2024-12-10 21:40:45.187898] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:44.447 [2024-12-10 21:40:45.187906] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.447 [2024-12-10 21:40:45.188246] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.447 [2024-12-10 21:40:45.188262] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:44.447 [2024-12-10 21:40:45.188316] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:44.447 [2024-12-10 21:40:45.188337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:44.447 [2024-12-10 21:40:45.188490] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:44.447 [2024-12-10 21:40:45.188499] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:44.447 [2024-12-10 21:40:45.188715] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:44.447 [2024-12-10 21:40:45.188853] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:44.447 [2024-12-10 21:40:45.188865] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:44.447 [2024-12-10 21:40:45.188997] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:44.447 pt4 00:13:44.447 21:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.447 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:44.447 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:44.447 21:40:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:44.447 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:44.447 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:44.447 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:44.447 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:44.447 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:44.447 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.447 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.447 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.447 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.447 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.447 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.447 21:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.448 21:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.448 21:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.709 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.709 "name": "raid_bdev1", 00:13:44.709 "uuid": "474b431a-7d59-4675-be5c-bf979dc98a0c", 00:13:44.709 "strip_size_kb": 0, 00:13:44.709 "state": "online", 00:13:44.710 "raid_level": "raid1", 00:13:44.710 "superblock": true, 00:13:44.710 "num_base_bdevs": 4, 00:13:44.710 
"num_base_bdevs_discovered": 4, 00:13:44.710 "num_base_bdevs_operational": 4, 00:13:44.710 "base_bdevs_list": [ 00:13:44.710 { 00:13:44.710 "name": "pt1", 00:13:44.710 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:44.710 "is_configured": true, 00:13:44.710 "data_offset": 2048, 00:13:44.710 "data_size": 63488 00:13:44.710 }, 00:13:44.710 { 00:13:44.710 "name": "pt2", 00:13:44.710 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:44.710 "is_configured": true, 00:13:44.710 "data_offset": 2048, 00:13:44.710 "data_size": 63488 00:13:44.710 }, 00:13:44.710 { 00:13:44.710 "name": "pt3", 00:13:44.710 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:44.710 "is_configured": true, 00:13:44.710 "data_offset": 2048, 00:13:44.710 "data_size": 63488 00:13:44.710 }, 00:13:44.710 { 00:13:44.710 "name": "pt4", 00:13:44.710 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:44.710 "is_configured": true, 00:13:44.710 "data_offset": 2048, 00:13:44.710 "data_size": 63488 00:13:44.710 } 00:13:44.710 ] 00:13:44.710 }' 00:13:44.710 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.710 21:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.969 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:44.969 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:44.969 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:44.969 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:44.969 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:44.969 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:44.969 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:13:44.969 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:44.969 21:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.969 21:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.969 [2024-12-10 21:40:45.651461] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:44.969 21:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.969 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:44.969 "name": "raid_bdev1", 00:13:44.969 "aliases": [ 00:13:44.969 "474b431a-7d59-4675-be5c-bf979dc98a0c" 00:13:44.969 ], 00:13:44.969 "product_name": "Raid Volume", 00:13:44.969 "block_size": 512, 00:13:44.969 "num_blocks": 63488, 00:13:44.969 "uuid": "474b431a-7d59-4675-be5c-bf979dc98a0c", 00:13:44.969 "assigned_rate_limits": { 00:13:44.969 "rw_ios_per_sec": 0, 00:13:44.969 "rw_mbytes_per_sec": 0, 00:13:44.969 "r_mbytes_per_sec": 0, 00:13:44.969 "w_mbytes_per_sec": 0 00:13:44.969 }, 00:13:44.969 "claimed": false, 00:13:44.969 "zoned": false, 00:13:44.969 "supported_io_types": { 00:13:44.969 "read": true, 00:13:44.969 "write": true, 00:13:44.969 "unmap": false, 00:13:44.969 "flush": false, 00:13:44.969 "reset": true, 00:13:44.969 "nvme_admin": false, 00:13:44.969 "nvme_io": false, 00:13:44.969 "nvme_io_md": false, 00:13:44.969 "write_zeroes": true, 00:13:44.969 "zcopy": false, 00:13:44.969 "get_zone_info": false, 00:13:44.969 "zone_management": false, 00:13:44.969 "zone_append": false, 00:13:44.969 "compare": false, 00:13:44.969 "compare_and_write": false, 00:13:44.969 "abort": false, 00:13:44.969 "seek_hole": false, 00:13:44.969 "seek_data": false, 00:13:44.969 "copy": false, 00:13:44.969 "nvme_iov_md": false 00:13:44.969 }, 00:13:44.969 "memory_domains": [ 00:13:44.969 { 00:13:44.969 "dma_device_id": "system", 00:13:44.969 
"dma_device_type": 1 00:13:44.969 }, 00:13:44.969 { 00:13:44.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.969 "dma_device_type": 2 00:13:44.969 }, 00:13:44.969 { 00:13:44.969 "dma_device_id": "system", 00:13:44.969 "dma_device_type": 1 00:13:44.969 }, 00:13:44.969 { 00:13:44.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.969 "dma_device_type": 2 00:13:44.969 }, 00:13:44.969 { 00:13:44.969 "dma_device_id": "system", 00:13:44.969 "dma_device_type": 1 00:13:44.969 }, 00:13:44.969 { 00:13:44.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.969 "dma_device_type": 2 00:13:44.969 }, 00:13:44.969 { 00:13:44.969 "dma_device_id": "system", 00:13:44.969 "dma_device_type": 1 00:13:44.969 }, 00:13:44.969 { 00:13:44.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.969 "dma_device_type": 2 00:13:44.969 } 00:13:44.969 ], 00:13:44.969 "driver_specific": { 00:13:44.969 "raid": { 00:13:44.969 "uuid": "474b431a-7d59-4675-be5c-bf979dc98a0c", 00:13:44.969 "strip_size_kb": 0, 00:13:44.969 "state": "online", 00:13:44.969 "raid_level": "raid1", 00:13:44.969 "superblock": true, 00:13:44.969 "num_base_bdevs": 4, 00:13:44.969 "num_base_bdevs_discovered": 4, 00:13:44.969 "num_base_bdevs_operational": 4, 00:13:44.969 "base_bdevs_list": [ 00:13:44.969 { 00:13:44.969 "name": "pt1", 00:13:44.969 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:44.969 "is_configured": true, 00:13:44.969 "data_offset": 2048, 00:13:44.969 "data_size": 63488 00:13:44.969 }, 00:13:44.969 { 00:13:44.969 "name": "pt2", 00:13:44.969 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:44.969 "is_configured": true, 00:13:44.969 "data_offset": 2048, 00:13:44.969 "data_size": 63488 00:13:44.969 }, 00:13:44.969 { 00:13:44.970 "name": "pt3", 00:13:44.970 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:44.970 "is_configured": true, 00:13:44.970 "data_offset": 2048, 00:13:44.970 "data_size": 63488 00:13:44.970 }, 00:13:44.970 { 00:13:44.970 "name": "pt4", 00:13:44.970 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:13:44.970 "is_configured": true, 00:13:44.970 "data_offset": 2048, 00:13:44.970 "data_size": 63488 00:13:44.970 } 00:13:44.970 ] 00:13:44.970 } 00:13:44.970 } 00:13:44.970 }' 00:13:44.970 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:44.970 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:44.970 pt2 00:13:44.970 pt3 00:13:44.970 pt4' 00:13:44.970 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:45.229 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:45.229 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:45.229 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:45.229 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:45.229 21:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.229 21:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.229 21:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.229 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:45.229 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:45.229 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:45.229 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:45.229 21:40:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:45.229 21:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.229 21:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.229 21:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.229 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:45.229 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:45.229 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:45.229 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:45.229 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:45.229 21:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.229 21:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.229 21:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.229 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:45.229 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:45.229 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:45.229 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:45.229 21:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.229 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:13:45.229 21:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.229 21:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.229 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:45.229 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:45.229 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:45.229 21:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.229 21:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:45.229 21:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.229 [2024-12-10 21:40:46.002899] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:45.489 21:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.489 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 474b431a-7d59-4675-be5c-bf979dc98a0c '!=' 474b431a-7d59-4675-be5c-bf979dc98a0c ']' 00:13:45.489 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:13:45.489 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:45.489 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:45.489 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:45.489 21:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.489 21:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.489 [2024-12-10 21:40:46.050515] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:45.489 21:40:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.489 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:45.489 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.489 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.489 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.489 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.489 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:45.489 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.489 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.489 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.489 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.489 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.489 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.489 21:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.489 21:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.489 21:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.489 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.489 "name": "raid_bdev1", 00:13:45.489 "uuid": "474b431a-7d59-4675-be5c-bf979dc98a0c", 00:13:45.489 "strip_size_kb": 0, 00:13:45.489 "state": "online", 
00:13:45.489 "raid_level": "raid1", 00:13:45.489 "superblock": true, 00:13:45.489 "num_base_bdevs": 4, 00:13:45.489 "num_base_bdevs_discovered": 3, 00:13:45.489 "num_base_bdevs_operational": 3, 00:13:45.489 "base_bdevs_list": [ 00:13:45.489 { 00:13:45.489 "name": null, 00:13:45.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.489 "is_configured": false, 00:13:45.489 "data_offset": 0, 00:13:45.489 "data_size": 63488 00:13:45.489 }, 00:13:45.489 { 00:13:45.489 "name": "pt2", 00:13:45.489 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:45.489 "is_configured": true, 00:13:45.489 "data_offset": 2048, 00:13:45.489 "data_size": 63488 00:13:45.489 }, 00:13:45.489 { 00:13:45.489 "name": "pt3", 00:13:45.489 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:45.489 "is_configured": true, 00:13:45.489 "data_offset": 2048, 00:13:45.489 "data_size": 63488 00:13:45.489 }, 00:13:45.489 { 00:13:45.489 "name": "pt4", 00:13:45.489 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:45.489 "is_configured": true, 00:13:45.489 "data_offset": 2048, 00:13:45.489 "data_size": 63488 00:13:45.489 } 00:13:45.489 ] 00:13:45.489 }' 00:13:45.489 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.489 21:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.057 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:46.057 21:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.057 21:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.057 [2024-12-10 21:40:46.545586] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:46.057 [2024-12-10 21:40:46.545620] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:46.057 [2024-12-10 21:40:46.545704] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:13:46.058 [2024-12-10 21:40:46.545786] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:46.058 [2024-12-10 21:40:46.545796] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:46.058 
21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.058 [2024-12-10 21:40:46.645415] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:46.058 [2024-12-10 21:40:46.645479] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.058 [2024-12-10 21:40:46.645515] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:46.058 [2024-12-10 21:40:46.645524] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.058 [2024-12-10 21:40:46.647719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.058 [2024-12-10 21:40:46.647755] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:46.058 [2024-12-10 21:40:46.647841] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:46.058 [2024-12-10 21:40:46.647884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:46.058 pt2 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.058 "name": "raid_bdev1", 00:13:46.058 "uuid": "474b431a-7d59-4675-be5c-bf979dc98a0c", 00:13:46.058 "strip_size_kb": 0, 00:13:46.058 "state": "configuring", 00:13:46.058 "raid_level": "raid1", 00:13:46.058 "superblock": true, 00:13:46.058 "num_base_bdevs": 4, 00:13:46.058 "num_base_bdevs_discovered": 1, 00:13:46.058 "num_base_bdevs_operational": 3, 00:13:46.058 "base_bdevs_list": [ 00:13:46.058 { 00:13:46.058 "name": null, 00:13:46.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.058 "is_configured": false, 00:13:46.058 "data_offset": 2048, 00:13:46.058 "data_size": 63488 00:13:46.058 }, 00:13:46.058 { 00:13:46.058 "name": "pt2", 00:13:46.058 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:46.058 "is_configured": true, 00:13:46.058 "data_offset": 2048, 00:13:46.058 "data_size": 63488 00:13:46.058 }, 00:13:46.058 { 00:13:46.058 "name": null, 00:13:46.058 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:46.058 "is_configured": false, 00:13:46.058 "data_offset": 2048, 00:13:46.058 "data_size": 63488 00:13:46.058 }, 00:13:46.058 { 00:13:46.058 "name": null, 00:13:46.058 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:46.058 "is_configured": false, 00:13:46.058 "data_offset": 2048, 00:13:46.058 "data_size": 63488 00:13:46.058 } 00:13:46.058 ] 00:13:46.058 }' 
00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.058 21:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.317 21:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:46.317 21:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:46.318 21:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:46.318 21:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.318 21:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.318 [2024-12-10 21:40:47.084711] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:46.318 [2024-12-10 21:40:47.084842] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.318 [2024-12-10 21:40:47.084886] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:46.318 [2024-12-10 21:40:47.084920] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.318 [2024-12-10 21:40:47.085467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.318 [2024-12-10 21:40:47.085531] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:46.318 [2024-12-10 21:40:47.085658] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:46.318 [2024-12-10 21:40:47.085713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:46.318 pt3 00:13:46.318 21:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.318 21:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:13:46.318 21:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:46.318 21:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:46.318 21:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:46.318 21:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.318 21:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:46.318 21:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.318 21:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.318 21:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.318 21:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.318 21:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.318 21:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.318 21:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.577 21:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.577 21:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.577 21:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.577 "name": "raid_bdev1", 00:13:46.577 "uuid": "474b431a-7d59-4675-be5c-bf979dc98a0c", 00:13:46.577 "strip_size_kb": 0, 00:13:46.577 "state": "configuring", 00:13:46.577 "raid_level": "raid1", 00:13:46.577 "superblock": true, 00:13:46.577 "num_base_bdevs": 4, 00:13:46.577 "num_base_bdevs_discovered": 2, 00:13:46.577 "num_base_bdevs_operational": 3, 00:13:46.577 
"base_bdevs_list": [ 00:13:46.577 { 00:13:46.577 "name": null, 00:13:46.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.577 "is_configured": false, 00:13:46.577 "data_offset": 2048, 00:13:46.577 "data_size": 63488 00:13:46.577 }, 00:13:46.577 { 00:13:46.577 "name": "pt2", 00:13:46.577 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:46.577 "is_configured": true, 00:13:46.577 "data_offset": 2048, 00:13:46.577 "data_size": 63488 00:13:46.577 }, 00:13:46.577 { 00:13:46.577 "name": "pt3", 00:13:46.577 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:46.577 "is_configured": true, 00:13:46.577 "data_offset": 2048, 00:13:46.577 "data_size": 63488 00:13:46.577 }, 00:13:46.577 { 00:13:46.577 "name": null, 00:13:46.577 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:46.577 "is_configured": false, 00:13:46.577 "data_offset": 2048, 00:13:46.577 "data_size": 63488 00:13:46.577 } 00:13:46.577 ] 00:13:46.577 }' 00:13:46.577 21:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.577 21:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.836 21:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:46.836 21:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:46.836 21:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:13:46.836 21:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:46.836 21:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.836 21:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.836 [2024-12-10 21:40:47.531975] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:46.836 [2024-12-10 21:40:47.532053] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.836 [2024-12-10 21:40:47.532082] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:13:46.836 [2024-12-10 21:40:47.532093] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.836 [2024-12-10 21:40:47.532620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.836 [2024-12-10 21:40:47.532641] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:46.836 [2024-12-10 21:40:47.532741] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:46.836 [2024-12-10 21:40:47.532774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:46.836 [2024-12-10 21:40:47.532933] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:46.836 [2024-12-10 21:40:47.532943] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:46.836 [2024-12-10 21:40:47.533217] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:46.836 [2024-12-10 21:40:47.533396] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:46.836 [2024-12-10 21:40:47.533411] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:46.836 [2024-12-10 21:40:47.533678] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:46.836 pt4 00:13:46.836 21:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.836 21:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:46.836 21:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:46.836 21:40:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.836 21:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:46.836 21:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.836 21:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:46.836 21:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.836 21:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.836 21:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.836 21:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.836 21:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.836 21:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.836 21:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.836 21:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.836 21:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.837 21:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.837 "name": "raid_bdev1", 00:13:46.837 "uuid": "474b431a-7d59-4675-be5c-bf979dc98a0c", 00:13:46.837 "strip_size_kb": 0, 00:13:46.837 "state": "online", 00:13:46.837 "raid_level": "raid1", 00:13:46.837 "superblock": true, 00:13:46.837 "num_base_bdevs": 4, 00:13:46.837 "num_base_bdevs_discovered": 3, 00:13:46.837 "num_base_bdevs_operational": 3, 00:13:46.837 "base_bdevs_list": [ 00:13:46.837 { 00:13:46.837 "name": null, 00:13:46.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.837 "is_configured": false, 00:13:46.837 
"data_offset": 2048, 00:13:46.837 "data_size": 63488 00:13:46.837 }, 00:13:46.837 { 00:13:46.837 "name": "pt2", 00:13:46.837 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:46.837 "is_configured": true, 00:13:46.837 "data_offset": 2048, 00:13:46.837 "data_size": 63488 00:13:46.837 }, 00:13:46.837 { 00:13:46.837 "name": "pt3", 00:13:46.837 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:46.837 "is_configured": true, 00:13:46.837 "data_offset": 2048, 00:13:46.837 "data_size": 63488 00:13:46.837 }, 00:13:46.837 { 00:13:46.837 "name": "pt4", 00:13:46.837 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:46.837 "is_configured": true, 00:13:46.837 "data_offset": 2048, 00:13:46.837 "data_size": 63488 00:13:46.837 } 00:13:46.837 ] 00:13:46.837 }' 00:13:46.837 21:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.837 21:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.406 21:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:47.406 21:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.406 21:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.406 [2024-12-10 21:40:47.971174] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:47.406 [2024-12-10 21:40:47.971212] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:47.406 [2024-12-10 21:40:47.971296] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:47.406 [2024-12-10 21:40:47.971373] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:47.406 [2024-12-10 21:40:47.971387] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:47.406 21:40:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.406 21:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:47.406 21:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.406 21:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.406 21:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.406 21:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.406 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:47.406 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:47.406 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:13:47.406 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:13:47.406 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:13:47.406 21:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.406 21:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.406 21:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.406 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:47.406 21:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.406 21:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.406 [2024-12-10 21:40:48.035067] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:47.406 [2024-12-10 21:40:48.035140] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:13:47.406 [2024-12-10 21:40:48.035164] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:13:47.406 [2024-12-10 21:40:48.035179] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.406 [2024-12-10 21:40:48.037711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.406 [2024-12-10 21:40:48.037748] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:47.406 [2024-12-10 21:40:48.037835] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:47.406 [2024-12-10 21:40:48.037884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:47.406 [2024-12-10 21:40:48.038071] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:47.406 [2024-12-10 21:40:48.038095] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:47.406 [2024-12-10 21:40:48.038111] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:13:47.406 [2024-12-10 21:40:48.038183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:47.406 [2024-12-10 21:40:48.038291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:47.406 pt1 00:13:47.406 21:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.406 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:13:47.406 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:47.406 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.406 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:13:47.406 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:47.406 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:47.406 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:47.406 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.406 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.406 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.406 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.406 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.406 21:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.406 21:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.406 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.406 21:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.406 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.407 "name": "raid_bdev1", 00:13:47.407 "uuid": "474b431a-7d59-4675-be5c-bf979dc98a0c", 00:13:47.407 "strip_size_kb": 0, 00:13:47.407 "state": "configuring", 00:13:47.407 "raid_level": "raid1", 00:13:47.407 "superblock": true, 00:13:47.407 "num_base_bdevs": 4, 00:13:47.407 "num_base_bdevs_discovered": 2, 00:13:47.407 "num_base_bdevs_operational": 3, 00:13:47.407 "base_bdevs_list": [ 00:13:47.407 { 00:13:47.407 "name": null, 00:13:47.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.407 "is_configured": false, 00:13:47.407 "data_offset": 2048, 00:13:47.407 
"data_size": 63488 00:13:47.407 }, 00:13:47.407 { 00:13:47.407 "name": "pt2", 00:13:47.407 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:47.407 "is_configured": true, 00:13:47.407 "data_offset": 2048, 00:13:47.407 "data_size": 63488 00:13:47.407 }, 00:13:47.407 { 00:13:47.407 "name": "pt3", 00:13:47.407 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:47.407 "is_configured": true, 00:13:47.407 "data_offset": 2048, 00:13:47.407 "data_size": 63488 00:13:47.407 }, 00:13:47.407 { 00:13:47.407 "name": null, 00:13:47.407 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:47.407 "is_configured": false, 00:13:47.407 "data_offset": 2048, 00:13:47.407 "data_size": 63488 00:13:47.407 } 00:13:47.407 ] 00:13:47.407 }' 00:13:47.407 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.407 21:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.975 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:47.975 21:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.975 21:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.975 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:47.975 21:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.975 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:47.975 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:47.975 21:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.975 21:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.975 [2024-12-10 
21:40:48.586170] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:47.975 [2024-12-10 21:40:48.586237] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.975 [2024-12-10 21:40:48.586260] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:13:47.975 [2024-12-10 21:40:48.586269] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.975 [2024-12-10 21:40:48.586719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.975 [2024-12-10 21:40:48.586738] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:47.975 [2024-12-10 21:40:48.586825] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:47.975 [2024-12-10 21:40:48.586847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:47.975 [2024-12-10 21:40:48.586982] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:13:47.975 [2024-12-10 21:40:48.586991] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:47.975 [2024-12-10 21:40:48.587265] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:47.975 [2024-12-10 21:40:48.587446] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:13:47.975 [2024-12-10 21:40:48.587463] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:13:47.975 [2024-12-10 21:40:48.587614] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.975 pt4 00:13:47.975 21:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.975 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:47.975 21:40:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.975 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.975 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:47.975 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:47.975 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:47.975 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.975 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.975 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.975 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.975 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.975 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.975 21:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.975 21:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.975 21:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.975 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.975 "name": "raid_bdev1", 00:13:47.975 "uuid": "474b431a-7d59-4675-be5c-bf979dc98a0c", 00:13:47.975 "strip_size_kb": 0, 00:13:47.975 "state": "online", 00:13:47.975 "raid_level": "raid1", 00:13:47.975 "superblock": true, 00:13:47.975 "num_base_bdevs": 4, 00:13:47.975 "num_base_bdevs_discovered": 3, 00:13:47.975 "num_base_bdevs_operational": 3, 00:13:47.975 "base_bdevs_list": [ 00:13:47.975 { 
00:13:47.975 "name": null, 00:13:47.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.975 "is_configured": false, 00:13:47.975 "data_offset": 2048, 00:13:47.975 "data_size": 63488 00:13:47.975 }, 00:13:47.975 { 00:13:47.975 "name": "pt2", 00:13:47.975 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:47.975 "is_configured": true, 00:13:47.975 "data_offset": 2048, 00:13:47.975 "data_size": 63488 00:13:47.975 }, 00:13:47.975 { 00:13:47.975 "name": "pt3", 00:13:47.975 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:47.975 "is_configured": true, 00:13:47.975 "data_offset": 2048, 00:13:47.975 "data_size": 63488 00:13:47.975 }, 00:13:47.975 { 00:13:47.975 "name": "pt4", 00:13:47.975 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:47.975 "is_configured": true, 00:13:47.975 "data_offset": 2048, 00:13:47.975 "data_size": 63488 00:13:47.975 } 00:13:47.975 ] 00:13:47.975 }' 00:13:47.976 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.976 21:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.235 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:48.235 21:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.235 21:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:48.235 21:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.235 21:40:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.493 21:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:48.493 21:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:48.494 21:40:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.494 
21:40:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.494 21:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:48.494 [2024-12-10 21:40:49.041678] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:48.494 21:40:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.494 21:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 474b431a-7d59-4675-be5c-bf979dc98a0c '!=' 474b431a-7d59-4675-be5c-bf979dc98a0c ']' 00:13:48.494 21:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74680 00:13:48.494 21:40:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74680 ']' 00:13:48.494 21:40:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74680 00:13:48.494 21:40:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:48.494 21:40:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:48.494 21:40:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74680 00:13:48.494 21:40:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:48.494 21:40:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:48.494 killing process with pid 74680 00:13:48.494 21:40:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74680' 00:13:48.494 21:40:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74680 00:13:48.494 [2024-12-10 21:40:49.127634] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:48.494 [2024-12-10 21:40:49.127747] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:48.494 21:40:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74680 00:13:48.494 [2024-12-10 21:40:49.127834] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:48.494 [2024-12-10 21:40:49.127855] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:13:48.753 [2024-12-10 21:40:49.525234] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:50.139 21:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:50.139 00:13:50.139 real 0m8.623s 00:13:50.139 user 0m13.638s 00:13:50.139 sys 0m1.542s 00:13:50.139 21:40:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:50.139 21:40:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.140 ************************************ 00:13:50.140 END TEST raid_superblock_test 00:13:50.140 ************************************ 00:13:50.140 21:40:50 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:13:50.140 21:40:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:50.140 21:40:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:50.140 21:40:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:50.140 ************************************ 00:13:50.140 START TEST raid_read_error_test 00:13:50.140 ************************************ 00:13:50.140 21:40:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:13:50.140 21:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:50.140 21:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:50.140 21:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:50.140 
21:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:50.140 21:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:50.140 21:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:50.140 21:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:50.140 21:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:50.140 21:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:50.140 21:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:50.140 21:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:50.140 21:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:50.140 21:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:50.140 21:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:50.140 21:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:50.140 21:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:50.140 21:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:50.140 21:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:50.140 21:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:50.140 21:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:50.140 21:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:50.140 21:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:50.140 21:40:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:50.140 21:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:50.140 21:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:50.140 21:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:50.140 21:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:50.140 21:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.RApvrIXPa4 00:13:50.140 21:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75173 00:13:50.140 21:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:50.140 21:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75173 00:13:50.140 21:40:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75173 ']' 00:13:50.140 21:40:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.140 21:40:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:50.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.140 21:40:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.140 21:40:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:50.140 21:40:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.140 [2024-12-10 21:40:50.852112] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
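The `raid_io_error_test` prologue traced at `bdev_raid.sh@793`-`@795` builds the `base_bdevs` array by looping `i` from 1 to `num_base_bdevs` and emitting `BaseBdevN` names. A POSIX rendition of that loop is sketched below (the real script uses a bash array populated via command substitution; a space-separated string is used here only for portability):

```shell
#!/bin/sh
# Reproduce the BaseBdev1..BaseBdevN naming loop from the test prologue.
num_base_bdevs=4
base_bdevs=""
i=1
while [ "$i" -le "$num_base_bdevs" ]; do
    base_bdevs="$base_bdevs BaseBdev$i"
    i=$((i + 1))
done
base_bdevs=${base_bdevs# }   # drop the leading separator
echo "$base_bdevs"           # BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4
```

Each generated name is later paired with a `<name>_malloc` backing bdev, an error bdev, and an `EE_`-prefixed passthru, as the `bdev_malloc_create` / `bdev_error_create` / `bdev_passthru_create` RPC sequence for `BaseBdev1` shows further down in the log.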
00:13:50.140 [2024-12-10 21:40:50.852242] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75173 ] 00:13:50.399 [2024-12-10 21:40:51.025697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.399 [2024-12-10 21:40:51.138744] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.657 [2024-12-10 21:40:51.345548] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:50.657 [2024-12-10 21:40:51.345616] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:50.916 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:50.916 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:50.916 21:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:50.916 21:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:50.916 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.916 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.175 BaseBdev1_malloc 00:13:51.175 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.175 21:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:51.175 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.175 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.175 true 00:13:51.175 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:51.175 21:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:51.175 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.175 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.175 [2024-12-10 21:40:51.753127] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:51.175 [2024-12-10 21:40:51.753185] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:51.175 [2024-12-10 21:40:51.753206] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:51.175 [2024-12-10 21:40:51.753217] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:51.175 [2024-12-10 21:40:51.755473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:51.175 [2024-12-10 21:40:51.755511] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:51.175 BaseBdev1 00:13:51.175 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.175 21:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:51.175 21:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:51.175 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.175 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.175 BaseBdev2_malloc 00:13:51.175 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.175 21:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:51.175 21:40:51 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.175 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.175 true 00:13:51.175 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.175 21:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:51.175 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.175 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.175 [2024-12-10 21:40:51.819571] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:51.175 [2024-12-10 21:40:51.819626] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:51.175 [2024-12-10 21:40:51.819642] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:51.175 [2024-12-10 21:40:51.819651] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:51.175 [2024-12-10 21:40:51.821709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:51.175 [2024-12-10 21:40:51.821743] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:51.175 BaseBdev2 00:13:51.175 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.175 21:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:51.175 21:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:51.175 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.175 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.175 BaseBdev3_malloc 00:13:51.175 21:40:51 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.175 21:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:51.175 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.175 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.175 true 00:13:51.175 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.175 21:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:51.175 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.176 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.176 [2024-12-10 21:40:51.902562] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:51.176 [2024-12-10 21:40:51.902613] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:51.176 [2024-12-10 21:40:51.902630] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:51.176 [2024-12-10 21:40:51.902640] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:51.176 [2024-12-10 21:40:51.904682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:51.176 [2024-12-10 21:40:51.904718] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:51.176 BaseBdev3 00:13:51.176 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.176 21:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:51.176 21:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:13:51.176 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.176 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.176 BaseBdev4_malloc 00:13:51.176 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.176 21:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:51.176 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.176 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.435 true 00:13:51.435 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.435 21:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:51.435 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.435 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.435 [2024-12-10 21:40:51.969214] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:51.435 [2024-12-10 21:40:51.969274] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:51.435 [2024-12-10 21:40:51.969294] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:51.435 [2024-12-10 21:40:51.969305] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:51.435 [2024-12-10 21:40:51.971522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:51.435 [2024-12-10 21:40:51.971560] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:51.435 BaseBdev4 00:13:51.435 21:40:51 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.435 21:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:51.435 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.435 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.435 [2024-12-10 21:40:51.981237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:51.435 [2024-12-10 21:40:51.983091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:51.435 [2024-12-10 21:40:51.983172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:51.435 [2024-12-10 21:40:51.983235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:51.435 [2024-12-10 21:40:51.983499] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:51.435 [2024-12-10 21:40:51.983527] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:51.435 [2024-12-10 21:40:51.983818] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:51.435 [2024-12-10 21:40:51.984019] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:51.435 [2024-12-10 21:40:51.984037] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:51.435 [2024-12-10 21:40:51.984227] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:51.435 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.435 21:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:51.435 21:40:51 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:51.435 21:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.435 21:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:51.435 21:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:51.435 21:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:51.435 21:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.435 21:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.435 21:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.435 21:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.435 21:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.435 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.435 21:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.435 21:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.435 21:40:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.435 21:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.435 "name": "raid_bdev1", 00:13:51.435 "uuid": "a0b86a74-a1bc-461f-a7ac-cb6fb6965317", 00:13:51.435 "strip_size_kb": 0, 00:13:51.435 "state": "online", 00:13:51.435 "raid_level": "raid1", 00:13:51.435 "superblock": true, 00:13:51.435 "num_base_bdevs": 4, 00:13:51.435 "num_base_bdevs_discovered": 4, 00:13:51.435 "num_base_bdevs_operational": 4, 00:13:51.435 "base_bdevs_list": [ 00:13:51.435 { 
00:13:51.435 "name": "BaseBdev1", 00:13:51.435 "uuid": "fee2498f-bdec-5126-bab6-088971101b8b", 00:13:51.435 "is_configured": true, 00:13:51.435 "data_offset": 2048, 00:13:51.435 "data_size": 63488 00:13:51.435 }, 00:13:51.435 { 00:13:51.435 "name": "BaseBdev2", 00:13:51.435 "uuid": "3797e7ca-4bb4-5f17-881f-27f1853e003a", 00:13:51.435 "is_configured": true, 00:13:51.435 "data_offset": 2048, 00:13:51.435 "data_size": 63488 00:13:51.435 }, 00:13:51.435 { 00:13:51.435 "name": "BaseBdev3", 00:13:51.435 "uuid": "36d188fc-242d-5f54-b1f9-419b1d7b55a7", 00:13:51.435 "is_configured": true, 00:13:51.435 "data_offset": 2048, 00:13:51.435 "data_size": 63488 00:13:51.435 }, 00:13:51.435 { 00:13:51.435 "name": "BaseBdev4", 00:13:51.435 "uuid": "f4ec4e97-c722-5615-ae8f-d15a82e0dd88", 00:13:51.435 "is_configured": true, 00:13:51.435 "data_offset": 2048, 00:13:51.435 "data_size": 63488 00:13:51.435 } 00:13:51.435 ] 00:13:51.435 }' 00:13:51.435 21:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.435 21:40:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.695 21:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:51.695 21:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:51.954 [2024-12-10 21:40:52.550009] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:52.888 21:40:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:52.888 21:40:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.888 21:40:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.888 21:40:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.888 21:40:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:52.888 21:40:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:52.888 21:40:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:13:52.888 21:40:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:52.888 21:40:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:52.888 21:40:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.888 21:40:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.888 21:40:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.888 21:40:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.888 21:40:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:52.889 21:40:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.889 21:40:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.889 21:40:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.889 21:40:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.889 21:40:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.889 21:40:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.889 21:40:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.889 21:40:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.889 21:40:53 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.889 21:40:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.889 "name": "raid_bdev1", 00:13:52.889 "uuid": "a0b86a74-a1bc-461f-a7ac-cb6fb6965317", 00:13:52.889 "strip_size_kb": 0, 00:13:52.889 "state": "online", 00:13:52.889 "raid_level": "raid1", 00:13:52.889 "superblock": true, 00:13:52.889 "num_base_bdevs": 4, 00:13:52.889 "num_base_bdevs_discovered": 4, 00:13:52.889 "num_base_bdevs_operational": 4, 00:13:52.889 "base_bdevs_list": [ 00:13:52.889 { 00:13:52.889 "name": "BaseBdev1", 00:13:52.889 "uuid": "fee2498f-bdec-5126-bab6-088971101b8b", 00:13:52.889 "is_configured": true, 00:13:52.889 "data_offset": 2048, 00:13:52.889 "data_size": 63488 00:13:52.889 }, 00:13:52.889 { 00:13:52.889 "name": "BaseBdev2", 00:13:52.889 "uuid": "3797e7ca-4bb4-5f17-881f-27f1853e003a", 00:13:52.889 "is_configured": true, 00:13:52.889 "data_offset": 2048, 00:13:52.889 "data_size": 63488 00:13:52.889 }, 00:13:52.889 { 00:13:52.889 "name": "BaseBdev3", 00:13:52.889 "uuid": "36d188fc-242d-5f54-b1f9-419b1d7b55a7", 00:13:52.889 "is_configured": true, 00:13:52.889 "data_offset": 2048, 00:13:52.889 "data_size": 63488 00:13:52.889 }, 00:13:52.889 { 00:13:52.889 "name": "BaseBdev4", 00:13:52.889 "uuid": "f4ec4e97-c722-5615-ae8f-d15a82e0dd88", 00:13:52.889 "is_configured": true, 00:13:52.889 "data_offset": 2048, 00:13:52.889 "data_size": 63488 00:13:52.889 } 00:13:52.889 ] 00:13:52.889 }' 00:13:52.889 21:40:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.889 21:40:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.455 21:40:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:53.455 21:40:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.455 21:40:53 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:53.455 [2024-12-10 21:40:53.934774] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:53.455 [2024-12-10 21:40:53.934815] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:53.455 [2024-12-10 21:40:53.937750] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:53.455 [2024-12-10 21:40:53.937819] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.455 [2024-12-10 21:40:53.937936] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:53.455 [2024-12-10 21:40:53.937949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:53.455 { 00:13:53.455 "results": [ 00:13:53.455 { 00:13:53.455 "job": "raid_bdev1", 00:13:53.455 "core_mask": "0x1", 00:13:53.455 "workload": "randrw", 00:13:53.455 "percentage": 50, 00:13:53.455 "status": "finished", 00:13:53.455 "queue_depth": 1, 00:13:53.455 "io_size": 131072, 00:13:53.455 "runtime": 1.385421, 00:13:53.455 "iops": 9641.112701482076, 00:13:53.455 "mibps": 1205.1390876852595, 00:13:53.455 "io_failed": 0, 00:13:53.455 "io_timeout": 0, 00:13:53.455 "avg_latency_us": 100.63758348581922, 00:13:53.455 "min_latency_us": 24.929257641921396, 00:13:53.455 "max_latency_us": 1430.9170305676855 00:13:53.455 } 00:13:53.455 ], 00:13:53.455 "core_count": 1 00:13:53.455 } 00:13:53.455 21:40:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.455 21:40:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75173 00:13:53.455 21:40:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75173 ']' 00:13:53.456 21:40:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75173 00:13:53.456 21:40:53 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:13:53.456 21:40:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:53.456 21:40:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75173 00:13:53.456 21:40:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:53.456 21:40:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:53.456 killing process with pid 75173 00:13:53.456 21:40:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75173' 00:13:53.456 21:40:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75173 00:13:53.456 [2024-12-10 21:40:53.980161] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:53.456 21:40:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75173 00:13:53.714 [2024-12-10 21:40:54.322756] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:55.089 21:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:55.089 21:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.RApvrIXPa4 00:13:55.089 21:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:55.089 21:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:55.089 21:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:55.089 21:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:55.089 21:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:55.089 21:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:55.089 00:13:55.089 real 0m4.864s 00:13:55.089 user 0m5.747s 00:13:55.089 sys 0m0.589s 
00:13:55.089 21:40:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:55.089 21:40:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.089 ************************************ 00:13:55.089 END TEST raid_read_error_test 00:13:55.089 ************************************ 00:13:55.089 21:40:55 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:13:55.089 21:40:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:55.089 21:40:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:55.089 21:40:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:55.089 ************************************ 00:13:55.089 START TEST raid_write_error_test 00:13:55.089 ************************************ 00:13:55.089 21:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:13:55.089 21:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:55.089 21:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:55.089 21:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:55.089 21:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:55.089 21:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:55.089 21:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:55.089 21:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:55.089 21:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:55.089 21:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:55.089 21:40:55 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:55.089 21:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:55.089 21:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:55.089 21:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:55.089 21:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:55.089 21:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:55.089 21:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:55.089 21:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:55.089 21:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:55.089 21:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:55.089 21:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:55.089 21:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:55.089 21:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:55.089 21:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:55.089 21:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:55.089 21:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:55.089 21:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:55.089 21:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:55.089 21:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.gKBPo67IHF 00:13:55.089 21:40:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75316 00:13:55.089 21:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:55.089 21:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75316 00:13:55.089 21:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75316 ']' 00:13:55.089 21:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.089 21:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:55.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.089 21:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.089 21:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:55.089 21:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.089 [2024-12-10 21:40:55.783148] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:13:55.089 [2024-12-10 21:40:55.783265] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75316 ] 00:13:55.348 [2024-12-10 21:40:55.956877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.349 [2024-12-10 21:40:56.077009] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.608 [2024-12-10 21:40:56.284878] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:55.608 [2024-12-10 21:40:56.284948] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:55.867 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:55.867 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:55.867 21:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:55.867 21:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:55.867 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.867 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.125 BaseBdev1_malloc 00:13:56.125 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.125 21:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:56.125 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.125 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.125 true 00:13:56.125 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:56.125 21:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:56.125 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.125 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.125 [2024-12-10 21:40:56.705566] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:56.125 [2024-12-10 21:40:56.705637] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.125 [2024-12-10 21:40:56.705668] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:56.125 [2024-12-10 21:40:56.705679] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.125 [2024-12-10 21:40:56.708075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.125 [2024-12-10 21:40:56.708117] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:56.125 BaseBdev1 00:13:56.125 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.125 21:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:56.125 21:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:56.125 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.125 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.125 BaseBdev2_malloc 00:13:56.125 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.125 21:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:56.125 21:40:56 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.125 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.125 true 00:13:56.125 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.125 21:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:56.125 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.125 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.125 [2024-12-10 21:40:56.777103] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:56.125 [2024-12-10 21:40:56.777165] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.125 [2024-12-10 21:40:56.777185] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:56.125 [2024-12-10 21:40:56.777195] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.125 [2024-12-10 21:40:56.779557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.125 [2024-12-10 21:40:56.779597] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:56.125 BaseBdev2 00:13:56.125 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.125 21:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:56.125 21:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:56.125 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.125 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:56.125 BaseBdev3_malloc 00:13:56.125 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.125 21:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:56.125 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.125 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.125 true 00:13:56.125 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.125 21:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:56.126 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.126 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.126 [2024-12-10 21:40:56.860341] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:56.126 [2024-12-10 21:40:56.860394] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.126 [2024-12-10 21:40:56.860412] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:56.126 [2024-12-10 21:40:56.860433] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.126 [2024-12-10 21:40:56.862485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.126 [2024-12-10 21:40:56.862517] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:56.126 BaseBdev3 00:13:56.126 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.126 21:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:56.126 21:40:56 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:56.126 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.126 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.385 BaseBdev4_malloc 00:13:56.385 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.385 21:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:56.385 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.385 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.385 true 00:13:56.385 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.385 21:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:56.385 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.385 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.385 [2024-12-10 21:40:56.932568] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:56.385 [2024-12-10 21:40:56.932623] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.385 [2024-12-10 21:40:56.932642] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:56.385 [2024-12-10 21:40:56.932653] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.385 [2024-12-10 21:40:56.934891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.385 [2024-12-10 21:40:56.934926] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:56.385 BaseBdev4 
00:13:56.385 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.385 21:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:56.385 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.385 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.385 [2024-12-10 21:40:56.944635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:56.385 [2024-12-10 21:40:56.946481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:56.385 [2024-12-10 21:40:56.946561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:56.385 [2024-12-10 21:40:56.946625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:56.385 [2024-12-10 21:40:56.946855] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:56.385 [2024-12-10 21:40:56.946880] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:56.385 [2024-12-10 21:40:56.947143] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:56.385 [2024-12-10 21:40:56.947332] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:56.385 [2024-12-10 21:40:56.947349] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:56.385 [2024-12-10 21:40:56.947571] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.385 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.385 21:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:13:56.385 21:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.385 21:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.385 21:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.385 21:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.385 21:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:56.385 21:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.385 21:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.385 21:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.385 21:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.385 21:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.385 21:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.385 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.385 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.386 21:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.386 21:40:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.386 "name": "raid_bdev1", 00:13:56.386 "uuid": "fb36fed5-a09b-4906-b1a9-8314b30d6653", 00:13:56.386 "strip_size_kb": 0, 00:13:56.386 "state": "online", 00:13:56.386 "raid_level": "raid1", 00:13:56.386 "superblock": true, 00:13:56.386 "num_base_bdevs": 4, 00:13:56.386 "num_base_bdevs_discovered": 4, 00:13:56.386 
"num_base_bdevs_operational": 4, 00:13:56.386 "base_bdevs_list": [ 00:13:56.386 { 00:13:56.386 "name": "BaseBdev1", 00:13:56.386 "uuid": "c755c764-fafd-5408-914c-ee5a38bfd696", 00:13:56.386 "is_configured": true, 00:13:56.386 "data_offset": 2048, 00:13:56.386 "data_size": 63488 00:13:56.386 }, 00:13:56.386 { 00:13:56.386 "name": "BaseBdev2", 00:13:56.386 "uuid": "67369fb9-f264-511b-b22e-b1c8803f1ffa", 00:13:56.386 "is_configured": true, 00:13:56.386 "data_offset": 2048, 00:13:56.386 "data_size": 63488 00:13:56.386 }, 00:13:56.386 { 00:13:56.386 "name": "BaseBdev3", 00:13:56.386 "uuid": "aa405fd6-4664-5298-934e-1418dd0f06fb", 00:13:56.386 "is_configured": true, 00:13:56.386 "data_offset": 2048, 00:13:56.386 "data_size": 63488 00:13:56.386 }, 00:13:56.386 { 00:13:56.386 "name": "BaseBdev4", 00:13:56.386 "uuid": "2192d1e2-2d8c-59ee-a836-fec1fb7597b6", 00:13:56.386 "is_configured": true, 00:13:56.386 "data_offset": 2048, 00:13:56.386 "data_size": 63488 00:13:56.386 } 00:13:56.386 ] 00:13:56.386 }' 00:13:56.386 21:40:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.386 21:40:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.644 21:40:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:56.644 21:40:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:56.903 [2024-12-10 21:40:57.477266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:57.841 21:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:57.841 21:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.841 21:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.841 [2024-12-10 21:40:58.412748] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:13:57.841 [2024-12-10 21:40:58.412822] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:57.841 [2024-12-10 21:40:58.413052] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:13:57.841 21:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.841 21:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:57.841 21:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:57.841 21:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:13:57.841 21:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:13:57.841 21:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:57.841 21:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:57.841 21:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.841 21:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:57.841 21:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:57.841 21:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:57.841 21:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.841 21:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.841 21:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.841 21:40:58 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.841 21:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.841 21:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.841 21:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.841 21:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.841 21:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.841 21:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.841 "name": "raid_bdev1", 00:13:57.841 "uuid": "fb36fed5-a09b-4906-b1a9-8314b30d6653", 00:13:57.841 "strip_size_kb": 0, 00:13:57.841 "state": "online", 00:13:57.841 "raid_level": "raid1", 00:13:57.841 "superblock": true, 00:13:57.841 "num_base_bdevs": 4, 00:13:57.841 "num_base_bdevs_discovered": 3, 00:13:57.841 "num_base_bdevs_operational": 3, 00:13:57.841 "base_bdevs_list": [ 00:13:57.841 { 00:13:57.841 "name": null, 00:13:57.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.841 "is_configured": false, 00:13:57.841 "data_offset": 0, 00:13:57.841 "data_size": 63488 00:13:57.841 }, 00:13:57.841 { 00:13:57.841 "name": "BaseBdev2", 00:13:57.841 "uuid": "67369fb9-f264-511b-b22e-b1c8803f1ffa", 00:13:57.841 "is_configured": true, 00:13:57.841 "data_offset": 2048, 00:13:57.841 "data_size": 63488 00:13:57.841 }, 00:13:57.841 { 00:13:57.841 "name": "BaseBdev3", 00:13:57.841 "uuid": "aa405fd6-4664-5298-934e-1418dd0f06fb", 00:13:57.841 "is_configured": true, 00:13:57.841 "data_offset": 2048, 00:13:57.841 "data_size": 63488 00:13:57.841 }, 00:13:57.841 { 00:13:57.841 "name": "BaseBdev4", 00:13:57.841 "uuid": "2192d1e2-2d8c-59ee-a836-fec1fb7597b6", 00:13:57.841 "is_configured": true, 00:13:57.841 "data_offset": 2048, 00:13:57.841 "data_size": 63488 00:13:57.841 } 00:13:57.841 ] 
00:13:57.841 }' 00:13:57.841 21:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.841 21:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.100 21:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:58.100 21:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.100 21:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.100 [2024-12-10 21:40:58.787963] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:58.100 [2024-12-10 21:40:58.787997] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:58.100 [2024-12-10 21:40:58.790990] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:58.100 [2024-12-10 21:40:58.791041] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.100 [2024-12-10 21:40:58.791146] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:58.100 [2024-12-10 21:40:58.791162] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:58.100 { 00:13:58.100 "results": [ 00:13:58.100 { 00:13:58.100 "job": "raid_bdev1", 00:13:58.100 "core_mask": "0x1", 00:13:58.100 "workload": "randrw", 00:13:58.100 "percentage": 50, 00:13:58.100 "status": "finished", 00:13:58.100 "queue_depth": 1, 00:13:58.100 "io_size": 131072, 00:13:58.100 "runtime": 1.311403, 00:13:58.100 "iops": 10915.79018806576, 00:13:58.100 "mibps": 1364.47377350822, 00:13:58.100 "io_failed": 0, 00:13:58.100 "io_timeout": 0, 00:13:58.100 "avg_latency_us": 88.7436652852918, 00:13:58.100 "min_latency_us": 23.699563318777294, 00:13:58.100 "max_latency_us": 1523.926637554585 00:13:58.100 } 00:13:58.100 ], 00:13:58.100 "core_count": 1 
00:13:58.100 } 00:13:58.100 21:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.100 21:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75316 00:13:58.100 21:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75316 ']' 00:13:58.100 21:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75316 00:13:58.100 21:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:13:58.100 21:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:58.100 21:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75316 00:13:58.100 21:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:58.100 21:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:58.100 21:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75316' 00:13:58.100 killing process with pid 75316 00:13:58.100 21:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75316 00:13:58.100 [2024-12-10 21:40:58.832221] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:58.100 21:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75316 00:13:58.668 [2024-12-10 21:40:59.168658] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:00.046 21:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.gKBPo67IHF 00:14:00.046 21:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:00.046 21:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:00.046 21:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:14:00.046 21:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:14:00.046 21:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:00.046 21:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:00.046 21:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:00.046 00:14:00.046 real 0m4.763s 00:14:00.046 user 0m5.538s 00:14:00.046 sys 0m0.581s 00:14:00.046 21:41:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:00.046 21:41:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.046 ************************************ 00:14:00.046 END TEST raid_write_error_test 00:14:00.046 ************************************ 00:14:00.046 21:41:00 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:14:00.046 21:41:00 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:14:00.046 21:41:00 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:14:00.046 21:41:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:00.046 21:41:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:00.046 21:41:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:00.046 ************************************ 00:14:00.046 START TEST raid_rebuild_test 00:14:00.046 ************************************ 00:14:00.046 21:41:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:14:00.046 21:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:00.046 21:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:00.046 21:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:00.046 
21:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:00.046 21:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:00.046 21:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:00.046 21:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:00.046 21:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:00.046 21:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:00.046 21:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:00.046 21:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:00.046 21:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:00.046 21:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:00.046 21:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:00.047 21:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:00.047 21:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:00.047 21:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:00.047 21:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:00.047 21:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:00.047 21:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:00.047 21:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:00.047 21:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:00.047 21:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:14:00.047 21:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75462 00:14:00.047 21:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:00.047 21:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75462 00:14:00.047 21:41:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75462 ']' 00:14:00.047 21:41:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.047 21:41:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:00.047 21:41:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.047 21:41:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:00.047 21:41:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.047 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:00.047 Zero copy mechanism will not be used. 00:14:00.047 [2024-12-10 21:41:00.607722] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:14:00.047 [2024-12-10 21:41:00.607851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75462 ] 00:14:00.047 [2024-12-10 21:41:00.784534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.307 [2024-12-10 21:41:00.912288] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.566 [2024-12-10 21:41:01.128265] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:00.566 [2024-12-10 21:41:01.128337] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:00.825 21:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:00.825 21:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:14:00.825 21:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:00.825 21:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:00.825 21:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.825 21:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.825 BaseBdev1_malloc 00:14:00.825 21:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.825 21:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:00.825 21:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.825 21:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.825 [2024-12-10 21:41:01.499918] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:00.825 
[2024-12-10 21:41:01.500001] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.825 [2024-12-10 21:41:01.500026] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:00.825 [2024-12-10 21:41:01.500039] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.825 [2024-12-10 21:41:01.502502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.825 [2024-12-10 21:41:01.502550] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:00.825 BaseBdev1 00:14:00.825 21:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.825 21:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:00.825 21:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:00.825 21:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.825 21:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.825 BaseBdev2_malloc 00:14:00.825 21:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.825 21:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:00.825 21:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.825 21:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.825 [2024-12-10 21:41:01.557851] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:00.825 [2024-12-10 21:41:01.557912] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.825 [2024-12-10 21:41:01.557932] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:14:00.825 [2024-12-10 21:41:01.557945] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.825 [2024-12-10 21:41:01.560321] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.825 [2024-12-10 21:41:01.560356] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:00.825 BaseBdev2 00:14:00.825 21:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.825 21:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:00.825 21:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.825 21:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.086 spare_malloc 00:14:01.086 21:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.086 21:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:01.086 21:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.086 21:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.086 spare_delay 00:14:01.086 21:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.086 21:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:01.086 21:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.086 21:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.086 [2024-12-10 21:41:01.642768] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:01.086 [2024-12-10 21:41:01.642834] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:14:01.086 [2024-12-10 21:41:01.642857] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:01.086 [2024-12-10 21:41:01.642868] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.086 [2024-12-10 21:41:01.645202] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.086 [2024-12-10 21:41:01.645242] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:01.086 spare 00:14:01.086 21:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.086 21:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:01.086 21:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.086 21:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.086 [2024-12-10 21:41:01.654800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:01.086 [2024-12-10 21:41:01.656784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:01.086 [2024-12-10 21:41:01.656887] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:01.086 [2024-12-10 21:41:01.656903] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:01.086 [2024-12-10 21:41:01.657169] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:01.086 [2024-12-10 21:41:01.657344] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:01.086 [2024-12-10 21:41:01.657362] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:01.086 [2024-12-10 21:41:01.657557] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:14:01.086 21:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.086 21:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:01.086 21:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:01.086 21:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:01.086 21:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:01.086 21:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:01.086 21:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:01.086 21:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.086 21:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.086 21:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.086 21:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.086 21:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.086 21:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.086 21:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.086 21:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.086 21:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.086 21:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.086 "name": "raid_bdev1", 00:14:01.086 "uuid": "c323b3ea-758c-4397-bf9b-fad6d3aa9896", 00:14:01.086 "strip_size_kb": 0, 00:14:01.086 "state": "online", 00:14:01.086 
"raid_level": "raid1", 00:14:01.086 "superblock": false, 00:14:01.086 "num_base_bdevs": 2, 00:14:01.086 "num_base_bdevs_discovered": 2, 00:14:01.086 "num_base_bdevs_operational": 2, 00:14:01.086 "base_bdevs_list": [ 00:14:01.086 { 00:14:01.086 "name": "BaseBdev1", 00:14:01.086 "uuid": "2d4c8732-db9b-5b6d-8d53-d69d13e34b1e", 00:14:01.086 "is_configured": true, 00:14:01.086 "data_offset": 0, 00:14:01.086 "data_size": 65536 00:14:01.086 }, 00:14:01.086 { 00:14:01.086 "name": "BaseBdev2", 00:14:01.086 "uuid": "a09feea3-f923-5533-ab1e-c7e695abe0c8", 00:14:01.086 "is_configured": true, 00:14:01.086 "data_offset": 0, 00:14:01.086 "data_size": 65536 00:14:01.086 } 00:14:01.086 ] 00:14:01.086 }' 00:14:01.086 21:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.086 21:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.655 21:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:01.655 21:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:01.655 21:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.655 21:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.655 [2024-12-10 21:41:02.150312] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:01.655 21:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.655 21:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:01.655 21:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.655 21:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:01.655 21:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.655 21:41:02 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.655 21:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.655 21:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:01.655 21:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:01.655 21:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:01.655 21:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:01.655 21:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:01.655 21:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:01.655 21:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:01.655 21:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:01.655 21:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:01.655 21:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:01.655 21:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:01.655 21:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:01.655 21:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:01.655 21:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:01.915 [2024-12-10 21:41:02.453543] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:01.915 /dev/nbd0 00:14:01.915 21:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:01.915 21:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:14:01.915 21:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:01.915 21:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:01.915 21:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:01.915 21:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:01.915 21:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:01.915 21:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:01.915 21:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:01.915 21:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:01.915 21:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:01.915 1+0 records in 00:14:01.915 1+0 records out 00:14:01.915 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381345 s, 10.7 MB/s 00:14:01.915 21:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.915 21:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:01.915 21:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.915 21:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:01.915 21:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:01.915 21:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:01.915 21:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:01.915 21:41:02 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:01.915 21:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:01.915 21:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:14:07.187 65536+0 records in 00:14:07.187 65536+0 records out 00:14:07.187 33554432 bytes (34 MB, 32 MiB) copied, 4.50809 s, 7.4 MB/s 00:14:07.187 21:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:07.187 21:41:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:07.187 21:41:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:07.187 21:41:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:07.187 21:41:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:07.187 21:41:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:07.187 21:41:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:07.187 21:41:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:07.187 [2024-12-10 21:41:07.274676] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:07.187 21:41:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:07.187 21:41:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:07.187 21:41:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:07.187 21:41:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:07.187 21:41:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:07.187 21:41:07 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:14:07.187 21:41:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:07.187 21:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:07.187 21:41:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.187 21:41:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.187 [2024-12-10 21:41:07.286800] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:07.187 21:41:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.187 21:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:07.187 21:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:07.187 21:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:07.187 21:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:07.187 21:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:07.187 21:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:07.188 21:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.188 21:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.188 21:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.188 21:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.188 21:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.188 21:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.188 21:41:07 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.188 21:41:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.188 21:41:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.188 21:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.188 "name": "raid_bdev1", 00:14:07.188 "uuid": "c323b3ea-758c-4397-bf9b-fad6d3aa9896", 00:14:07.188 "strip_size_kb": 0, 00:14:07.188 "state": "online", 00:14:07.188 "raid_level": "raid1", 00:14:07.188 "superblock": false, 00:14:07.188 "num_base_bdevs": 2, 00:14:07.188 "num_base_bdevs_discovered": 1, 00:14:07.188 "num_base_bdevs_operational": 1, 00:14:07.188 "base_bdevs_list": [ 00:14:07.188 { 00:14:07.188 "name": null, 00:14:07.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.188 "is_configured": false, 00:14:07.188 "data_offset": 0, 00:14:07.188 "data_size": 65536 00:14:07.188 }, 00:14:07.188 { 00:14:07.188 "name": "BaseBdev2", 00:14:07.188 "uuid": "a09feea3-f923-5533-ab1e-c7e695abe0c8", 00:14:07.188 "is_configured": true, 00:14:07.188 "data_offset": 0, 00:14:07.188 "data_size": 65536 00:14:07.188 } 00:14:07.188 ] 00:14:07.188 }' 00:14:07.188 21:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.188 21:41:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.188 21:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:07.188 21:41:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.188 21:41:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.188 [2024-12-10 21:41:07.742043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:07.188 [2024-12-10 21:41:07.761166] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:14:07.188 21:41:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.188 21:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:07.188 [2024-12-10 21:41:07.763159] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:08.158 21:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:08.158 21:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.158 21:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:08.158 21:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:08.158 21:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.158 21:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.158 21:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.158 21:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.158 21:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.158 21:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.158 21:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.158 "name": "raid_bdev1", 00:14:08.158 "uuid": "c323b3ea-758c-4397-bf9b-fad6d3aa9896", 00:14:08.158 "strip_size_kb": 0, 00:14:08.158 "state": "online", 00:14:08.158 "raid_level": "raid1", 00:14:08.158 "superblock": false, 00:14:08.158 "num_base_bdevs": 2, 00:14:08.158 "num_base_bdevs_discovered": 2, 00:14:08.158 "num_base_bdevs_operational": 2, 00:14:08.158 "process": { 00:14:08.158 "type": "rebuild", 00:14:08.158 "target": "spare", 00:14:08.158 "progress": { 00:14:08.158 
"blocks": 20480, 00:14:08.158 "percent": 31 00:14:08.158 } 00:14:08.158 }, 00:14:08.158 "base_bdevs_list": [ 00:14:08.158 { 00:14:08.158 "name": "spare", 00:14:08.158 "uuid": "998a73d2-c9a4-5e21-82f4-a5bfad586274", 00:14:08.158 "is_configured": true, 00:14:08.158 "data_offset": 0, 00:14:08.158 "data_size": 65536 00:14:08.158 }, 00:14:08.158 { 00:14:08.158 "name": "BaseBdev2", 00:14:08.158 "uuid": "a09feea3-f923-5533-ab1e-c7e695abe0c8", 00:14:08.158 "is_configured": true, 00:14:08.158 "data_offset": 0, 00:14:08.158 "data_size": 65536 00:14:08.158 } 00:14:08.158 ] 00:14:08.158 }' 00:14:08.158 21:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.158 21:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:08.158 21:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.158 21:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:08.158 21:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:08.158 21:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.158 21:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.158 [2024-12-10 21:41:08.914730] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:08.417 [2024-12-10 21:41:08.969216] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:08.417 [2024-12-10 21:41:08.969278] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.417 [2024-12-10 21:41:08.969292] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:08.417 [2024-12-10 21:41:08.969302] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:08.417 21:41:09 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.417 21:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:08.417 21:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.417 21:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.417 21:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:08.417 21:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:08.417 21:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:08.417 21:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.417 21:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.417 21:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.417 21:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.417 21:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.417 21:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.417 21:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.417 21:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.417 21:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.417 21:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.417 "name": "raid_bdev1", 00:14:08.417 "uuid": "c323b3ea-758c-4397-bf9b-fad6d3aa9896", 00:14:08.417 "strip_size_kb": 0, 00:14:08.417 "state": "online", 00:14:08.417 "raid_level": "raid1", 00:14:08.417 
"superblock": false, 00:14:08.417 "num_base_bdevs": 2, 00:14:08.417 "num_base_bdevs_discovered": 1, 00:14:08.417 "num_base_bdevs_operational": 1, 00:14:08.417 "base_bdevs_list": [ 00:14:08.417 { 00:14:08.417 "name": null, 00:14:08.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.417 "is_configured": false, 00:14:08.417 "data_offset": 0, 00:14:08.417 "data_size": 65536 00:14:08.417 }, 00:14:08.417 { 00:14:08.417 "name": "BaseBdev2", 00:14:08.417 "uuid": "a09feea3-f923-5533-ab1e-c7e695abe0c8", 00:14:08.417 "is_configured": true, 00:14:08.417 "data_offset": 0, 00:14:08.417 "data_size": 65536 00:14:08.417 } 00:14:08.417 ] 00:14:08.417 }' 00:14:08.417 21:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.417 21:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.677 21:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:08.677 21:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.677 21:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:08.677 21:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:08.677 21:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.677 21:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.677 21:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.677 21:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.677 21:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.677 21:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.677 21:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:14:08.677 "name": "raid_bdev1", 00:14:08.677 "uuid": "c323b3ea-758c-4397-bf9b-fad6d3aa9896", 00:14:08.677 "strip_size_kb": 0, 00:14:08.677 "state": "online", 00:14:08.677 "raid_level": "raid1", 00:14:08.677 "superblock": false, 00:14:08.677 "num_base_bdevs": 2, 00:14:08.677 "num_base_bdevs_discovered": 1, 00:14:08.677 "num_base_bdevs_operational": 1, 00:14:08.677 "base_bdevs_list": [ 00:14:08.677 { 00:14:08.677 "name": null, 00:14:08.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.677 "is_configured": false, 00:14:08.677 "data_offset": 0, 00:14:08.677 "data_size": 65536 00:14:08.677 }, 00:14:08.677 { 00:14:08.677 "name": "BaseBdev2", 00:14:08.677 "uuid": "a09feea3-f923-5533-ab1e-c7e695abe0c8", 00:14:08.677 "is_configured": true, 00:14:08.677 "data_offset": 0, 00:14:08.677 "data_size": 65536 00:14:08.677 } 00:14:08.677 ] 00:14:08.677 }' 00:14:08.677 21:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.937 21:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:08.937 21:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.937 21:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:08.937 21:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:08.937 21:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.937 21:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.937 [2024-12-10 21:41:09.536477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:08.937 [2024-12-10 21:41:09.552998] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:14:08.937 21:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.937 
21:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:08.937 [2024-12-10 21:41:09.554854] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:09.875 21:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:09.876 21:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.876 21:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:09.876 21:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:09.876 21:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.876 21:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.876 21:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.876 21:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.876 21:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.876 21:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.876 21:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.876 "name": "raid_bdev1", 00:14:09.876 "uuid": "c323b3ea-758c-4397-bf9b-fad6d3aa9896", 00:14:09.876 "strip_size_kb": 0, 00:14:09.876 "state": "online", 00:14:09.876 "raid_level": "raid1", 00:14:09.876 "superblock": false, 00:14:09.876 "num_base_bdevs": 2, 00:14:09.876 "num_base_bdevs_discovered": 2, 00:14:09.876 "num_base_bdevs_operational": 2, 00:14:09.876 "process": { 00:14:09.876 "type": "rebuild", 00:14:09.876 "target": "spare", 00:14:09.876 "progress": { 00:14:09.876 "blocks": 20480, 00:14:09.876 "percent": 31 00:14:09.876 } 00:14:09.876 }, 00:14:09.876 "base_bdevs_list": [ 
00:14:09.876 { 00:14:09.876 "name": "spare", 00:14:09.876 "uuid": "998a73d2-c9a4-5e21-82f4-a5bfad586274", 00:14:09.876 "is_configured": true, 00:14:09.876 "data_offset": 0, 00:14:09.876 "data_size": 65536 00:14:09.876 }, 00:14:09.876 { 00:14:09.876 "name": "BaseBdev2", 00:14:09.876 "uuid": "a09feea3-f923-5533-ab1e-c7e695abe0c8", 00:14:09.876 "is_configured": true, 00:14:09.876 "data_offset": 0, 00:14:09.876 "data_size": 65536 00:14:09.876 } 00:14:09.876 ] 00:14:09.876 }' 00:14:09.876 21:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.876 21:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:09.876 21:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.135 21:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:10.135 21:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:10.135 21:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:10.135 21:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:10.135 21:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:10.135 21:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=382 00:14:10.135 21:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:10.136 21:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:10.136 21:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.136 21:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:10.136 21:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:10.136 
21:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.136 21:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.136 21:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.136 21:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.136 21:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.136 21:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.136 21:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.136 "name": "raid_bdev1", 00:14:10.136 "uuid": "c323b3ea-758c-4397-bf9b-fad6d3aa9896", 00:14:10.136 "strip_size_kb": 0, 00:14:10.136 "state": "online", 00:14:10.136 "raid_level": "raid1", 00:14:10.136 "superblock": false, 00:14:10.136 "num_base_bdevs": 2, 00:14:10.136 "num_base_bdevs_discovered": 2, 00:14:10.136 "num_base_bdevs_operational": 2, 00:14:10.136 "process": { 00:14:10.136 "type": "rebuild", 00:14:10.136 "target": "spare", 00:14:10.136 "progress": { 00:14:10.136 "blocks": 22528, 00:14:10.136 "percent": 34 00:14:10.136 } 00:14:10.136 }, 00:14:10.136 "base_bdevs_list": [ 00:14:10.136 { 00:14:10.136 "name": "spare", 00:14:10.136 "uuid": "998a73d2-c9a4-5e21-82f4-a5bfad586274", 00:14:10.136 "is_configured": true, 00:14:10.136 "data_offset": 0, 00:14:10.136 "data_size": 65536 00:14:10.136 }, 00:14:10.136 { 00:14:10.136 "name": "BaseBdev2", 00:14:10.136 "uuid": "a09feea3-f923-5533-ab1e-c7e695abe0c8", 00:14:10.136 "is_configured": true, 00:14:10.136 "data_offset": 0, 00:14:10.136 "data_size": 65536 00:14:10.136 } 00:14:10.136 ] 00:14:10.136 }' 00:14:10.136 21:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.136 21:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:14:10.136 21:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.136 21:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:10.136 21:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:11.073 21:41:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:11.073 21:41:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:11.073 21:41:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.073 21:41:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:11.073 21:41:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:11.073 21:41:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.073 21:41:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.073 21:41:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.073 21:41:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.073 21:41:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.333 21:41:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.333 21:41:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:11.333 "name": "raid_bdev1", 00:14:11.333 "uuid": "c323b3ea-758c-4397-bf9b-fad6d3aa9896", 00:14:11.333 "strip_size_kb": 0, 00:14:11.333 "state": "online", 00:14:11.333 "raid_level": "raid1", 00:14:11.333 "superblock": false, 00:14:11.333 "num_base_bdevs": 2, 00:14:11.333 "num_base_bdevs_discovered": 2, 00:14:11.333 "num_base_bdevs_operational": 2, 00:14:11.333 "process": { 
00:14:11.333 "type": "rebuild", 00:14:11.333 "target": "spare", 00:14:11.333 "progress": { 00:14:11.333 "blocks": 45056, 00:14:11.333 "percent": 68 00:14:11.333 } 00:14:11.333 }, 00:14:11.333 "base_bdevs_list": [ 00:14:11.333 { 00:14:11.333 "name": "spare", 00:14:11.333 "uuid": "998a73d2-c9a4-5e21-82f4-a5bfad586274", 00:14:11.333 "is_configured": true, 00:14:11.333 "data_offset": 0, 00:14:11.333 "data_size": 65536 00:14:11.333 }, 00:14:11.333 { 00:14:11.333 "name": "BaseBdev2", 00:14:11.333 "uuid": "a09feea3-f923-5533-ab1e-c7e695abe0c8", 00:14:11.333 "is_configured": true, 00:14:11.333 "data_offset": 0, 00:14:11.333 "data_size": 65536 00:14:11.333 } 00:14:11.333 ] 00:14:11.333 }' 00:14:11.333 21:41:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:11.333 21:41:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:11.333 21:41:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:11.333 21:41:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:11.333 21:41:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:12.295 [2024-12-10 21:41:12.769801] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:12.295 [2024-12-10 21:41:12.769950] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:12.295 [2024-12-10 21:41:12.770005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.295 21:41:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:12.295 21:41:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:12.295 21:41:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.295 21:41:12 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:12.295 21:41:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:12.295 21:41:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.295 21:41:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.295 21:41:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.295 21:41:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.295 21:41:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.295 21:41:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.295 21:41:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.295 "name": "raid_bdev1", 00:14:12.295 "uuid": "c323b3ea-758c-4397-bf9b-fad6d3aa9896", 00:14:12.295 "strip_size_kb": 0, 00:14:12.295 "state": "online", 00:14:12.295 "raid_level": "raid1", 00:14:12.295 "superblock": false, 00:14:12.295 "num_base_bdevs": 2, 00:14:12.295 "num_base_bdevs_discovered": 2, 00:14:12.295 "num_base_bdevs_operational": 2, 00:14:12.295 "base_bdevs_list": [ 00:14:12.295 { 00:14:12.295 "name": "spare", 00:14:12.295 "uuid": "998a73d2-c9a4-5e21-82f4-a5bfad586274", 00:14:12.295 "is_configured": true, 00:14:12.295 "data_offset": 0, 00:14:12.296 "data_size": 65536 00:14:12.296 }, 00:14:12.296 { 00:14:12.296 "name": "BaseBdev2", 00:14:12.296 "uuid": "a09feea3-f923-5533-ab1e-c7e695abe0c8", 00:14:12.296 "is_configured": true, 00:14:12.296 "data_offset": 0, 00:14:12.296 "data_size": 65536 00:14:12.296 } 00:14:12.296 ] 00:14:12.296 }' 00:14:12.296 21:41:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.558 21:41:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:12.558 21:41:13 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.558 21:41:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:12.558 21:41:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:12.558 21:41:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:12.558 21:41:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.558 21:41:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:12.558 21:41:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:12.558 21:41:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.558 21:41:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.558 21:41:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.558 21:41:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.558 21:41:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.558 21:41:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.558 21:41:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.558 "name": "raid_bdev1", 00:14:12.558 "uuid": "c323b3ea-758c-4397-bf9b-fad6d3aa9896", 00:14:12.558 "strip_size_kb": 0, 00:14:12.558 "state": "online", 00:14:12.558 "raid_level": "raid1", 00:14:12.558 "superblock": false, 00:14:12.558 "num_base_bdevs": 2, 00:14:12.558 "num_base_bdevs_discovered": 2, 00:14:12.558 "num_base_bdevs_operational": 2, 00:14:12.558 "base_bdevs_list": [ 00:14:12.558 { 00:14:12.558 "name": "spare", 00:14:12.558 "uuid": "998a73d2-c9a4-5e21-82f4-a5bfad586274", 00:14:12.558 "is_configured": true, 
00:14:12.558 "data_offset": 0, 00:14:12.558 "data_size": 65536 00:14:12.558 }, 00:14:12.558 { 00:14:12.558 "name": "BaseBdev2", 00:14:12.558 "uuid": "a09feea3-f923-5533-ab1e-c7e695abe0c8", 00:14:12.558 "is_configured": true, 00:14:12.558 "data_offset": 0, 00:14:12.558 "data_size": 65536 00:14:12.558 } 00:14:12.558 ] 00:14:12.558 }' 00:14:12.558 21:41:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.558 21:41:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:12.558 21:41:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.558 21:41:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:12.558 21:41:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:12.558 21:41:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.558 21:41:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.558 21:41:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:12.558 21:41:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:12.558 21:41:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:12.558 21:41:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.558 21:41:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.558 21:41:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.558 21:41:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.558 21:41:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.558 21:41:13 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.558 21:41:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.558 21:41:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.558 21:41:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.558 21:41:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.558 "name": "raid_bdev1", 00:14:12.558 "uuid": "c323b3ea-758c-4397-bf9b-fad6d3aa9896", 00:14:12.558 "strip_size_kb": 0, 00:14:12.558 "state": "online", 00:14:12.558 "raid_level": "raid1", 00:14:12.558 "superblock": false, 00:14:12.558 "num_base_bdevs": 2, 00:14:12.558 "num_base_bdevs_discovered": 2, 00:14:12.558 "num_base_bdevs_operational": 2, 00:14:12.558 "base_bdevs_list": [ 00:14:12.558 { 00:14:12.558 "name": "spare", 00:14:12.558 "uuid": "998a73d2-c9a4-5e21-82f4-a5bfad586274", 00:14:12.558 "is_configured": true, 00:14:12.558 "data_offset": 0, 00:14:12.558 "data_size": 65536 00:14:12.558 }, 00:14:12.558 { 00:14:12.558 "name": "BaseBdev2", 00:14:12.558 "uuid": "a09feea3-f923-5533-ab1e-c7e695abe0c8", 00:14:12.558 "is_configured": true, 00:14:12.558 "data_offset": 0, 00:14:12.558 "data_size": 65536 00:14:12.558 } 00:14:12.558 ] 00:14:12.558 }' 00:14:12.558 21:41:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.558 21:41:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.127 21:41:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:13.127 21:41:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.127 21:41:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.127 [2024-12-10 21:41:13.683506] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:13.127 [2024-12-10 21:41:13.683538] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:13.127 [2024-12-10 21:41:13.683626] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:13.127 [2024-12-10 21:41:13.683697] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:13.127 [2024-12-10 21:41:13.683717] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:13.127 21:41:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.127 21:41:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.127 21:41:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.127 21:41:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.127 21:41:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:13.127 21:41:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.127 21:41:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:13.127 21:41:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:13.127 21:41:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:13.127 21:41:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:13.127 21:41:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:13.127 21:41:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:13.127 21:41:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:13.127 21:41:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:14:13.127 21:41:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:13.127 21:41:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:13.127 21:41:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:13.127 21:41:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:13.127 21:41:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:13.386 /dev/nbd0 00:14:13.386 21:41:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:13.386 21:41:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:13.386 21:41:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:13.386 21:41:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:13.386 21:41:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:13.386 21:41:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:13.386 21:41:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:13.386 21:41:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:13.386 21:41:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:13.387 21:41:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:13.387 21:41:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:13.387 1+0 records in 00:14:13.387 1+0 records out 00:14:13.387 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191971 s, 21.3 MB/s 00:14:13.387 21:41:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.387 21:41:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:13.387 21:41:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.387 21:41:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:13.387 21:41:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:13.387 21:41:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:13.387 21:41:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:13.387 21:41:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:13.647 /dev/nbd1 00:14:13.647 21:41:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:13.647 21:41:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:13.647 21:41:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:13.647 21:41:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:13.647 21:41:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:13.647 21:41:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:13.647 21:41:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:13.647 21:41:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:13.647 21:41:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:13.647 21:41:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:13.647 21:41:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:13.647 1+0 records in 00:14:13.647 1+0 records out 00:14:13.647 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000485535 s, 8.4 MB/s 00:14:13.647 21:41:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.647 21:41:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:13.647 21:41:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.647 21:41:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:13.647 21:41:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:13.647 21:41:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:13.647 21:41:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:13.647 21:41:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:13.906 21:41:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:13.906 21:41:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:13.906 21:41:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:13.906 21:41:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:13.906 21:41:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:13.906 21:41:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:13.906 21:41:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:13.906 21:41:14 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:13.906 21:41:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:13.906 21:41:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:13.906 21:41:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:13.906 21:41:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:13.906 21:41:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:14.166 21:41:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:14.166 21:41:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:14.166 21:41:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:14.166 21:41:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:14.166 21:41:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:14.166 21:41:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:14.166 21:41:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:14.166 21:41:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:14.166 21:41:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:14.166 21:41:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:14.166 21:41:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:14.166 21:41:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:14.166 21:41:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:14.166 21:41:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75462 00:14:14.166 21:41:14 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75462 ']' 00:14:14.166 21:41:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75462 00:14:14.166 21:41:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:14:14.166 21:41:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:14.166 21:41:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75462 00:14:14.427 21:41:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:14.427 21:41:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:14.427 killing process with pid 75462 00:14:14.427 21:41:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75462' 00:14:14.427 21:41:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75462 00:14:14.427 Received shutdown signal, test time was about 60.000000 seconds 00:14:14.427 00:14:14.427 Latency(us) 00:14:14.427 [2024-12-10T21:41:15.210Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.427 [2024-12-10T21:41:15.210Z] =================================================================================================================== 00:14:14.427 [2024-12-10T21:41:15.210Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:14.427 [2024-12-10 21:41:14.974398] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:14.427 21:41:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75462 00:14:14.687 [2024-12-10 21:41:15.284000] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:16.063 21:41:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:16.063 ************************************ 00:14:16.063 END TEST raid_rebuild_test 00:14:16.063 
************************************ 00:14:16.063 00:14:16.063 real 0m15.921s 00:14:16.063 user 0m18.132s 00:14:16.063 sys 0m3.073s 00:14:16.063 21:41:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:16.063 21:41:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.063 21:41:16 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:14:16.063 21:41:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:16.063 21:41:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:16.063 21:41:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:16.063 ************************************ 00:14:16.063 START TEST raid_rebuild_test_sb 00:14:16.063 ************************************ 00:14:16.063 21:41:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:14:16.063 21:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:16.063 21:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:16.063 21:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:16.063 21:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:16.063 21:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:16.063 21:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:16.063 21:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:16.063 21:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:16.063 21:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:16.063 21:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( 
i <= num_base_bdevs )) 00:14:16.063 21:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:16.064 21:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:16.064 21:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:16.064 21:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:16.064 21:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:16.064 21:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:16.064 21:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:16.064 21:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:16.064 21:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:16.064 21:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:16.064 21:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:16.064 21:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:16.064 21:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:16.064 21:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:16.064 21:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:16.064 21:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75888 00:14:16.064 21:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75888 00:14:16.064 21:41:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75888 ']' 00:14:16.064 
21:41:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.064 21:41:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:16.064 21:41:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.064 21:41:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:16.064 21:41:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.064 [2024-12-10 21:41:16.601303] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:14:16.064 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:16.064 Zero copy mechanism will not be used. 00:14:16.064 [2024-12-10 21:41:16.601620] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75888 ] 00:14:16.064 [2024-12-10 21:41:16.764814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.322 [2024-12-10 21:41:16.882901] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.322 [2024-12-10 21:41:17.088900] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:16.322 [2024-12-10 21:41:17.088935] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:16.890 21:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:16.890 21:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:16.890 21:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # 
for bdev in "${base_bdevs[@]}" 00:14:16.890 21:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:16.890 21:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.890 21:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.890 BaseBdev1_malloc 00:14:16.890 21:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.890 21:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:16.890 21:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.890 21:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.890 [2024-12-10 21:41:17.545063] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:16.890 [2024-12-10 21:41:17.545131] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.890 [2024-12-10 21:41:17.545157] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:16.890 [2024-12-10 21:41:17.545168] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.890 [2024-12-10 21:41:17.547371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.890 [2024-12-10 21:41:17.547426] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:16.890 BaseBdev1 00:14:16.890 21:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.890 21:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:16.890 21:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:16.891 21:41:17 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.891 21:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.891 BaseBdev2_malloc 00:14:16.891 21:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.891 21:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:16.891 21:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.891 21:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.891 [2024-12-10 21:41:17.602272] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:16.891 [2024-12-10 21:41:17.602347] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.891 [2024-12-10 21:41:17.602370] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:16.891 [2024-12-10 21:41:17.602382] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.891 [2024-12-10 21:41:17.604821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.891 [2024-12-10 21:41:17.604867] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:16.891 BaseBdev2 00:14:16.891 21:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.891 21:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:16.891 21:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.891 21:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.891 spare_malloc 00:14:16.891 21:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:16.891 21:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:16.891 21:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.891 21:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.150 spare_delay 00:14:17.150 21:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.150 21:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:17.150 21:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.150 21:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.150 [2024-12-10 21:41:17.680784] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:17.150 [2024-12-10 21:41:17.680920] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.150 [2024-12-10 21:41:17.681001] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:17.150 [2024-12-10 21:41:17.681034] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.150 [2024-12-10 21:41:17.683139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.150 [2024-12-10 21:41:17.683216] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:17.150 spare 00:14:17.150 21:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.150 21:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:17.150 21:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.150 21:41:17 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.150 [2024-12-10 21:41:17.688834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:17.150 [2024-12-10 21:41:17.690653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:17.150 [2024-12-10 21:41:17.690819] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:17.150 [2024-12-10 21:41:17.690836] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:17.150 [2024-12-10 21:41:17.691078] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:17.150 [2024-12-10 21:41:17.691237] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:17.150 [2024-12-10 21:41:17.691246] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:17.150 [2024-12-10 21:41:17.691392] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:17.150 21:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.150 21:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:17.150 21:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:17.150 21:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:17.150 21:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:17.150 21:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:17.150 21:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:17.150 21:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:14:17.150 21:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.150 21:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.150 21:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.150 21:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.150 21:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.150 21:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.150 21:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.150 21:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.150 21:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.150 "name": "raid_bdev1", 00:14:17.150 "uuid": "36a8878b-7a18-4587-a063-5700a2f340bf", 00:14:17.150 "strip_size_kb": 0, 00:14:17.150 "state": "online", 00:14:17.150 "raid_level": "raid1", 00:14:17.150 "superblock": true, 00:14:17.150 "num_base_bdevs": 2, 00:14:17.150 "num_base_bdevs_discovered": 2, 00:14:17.150 "num_base_bdevs_operational": 2, 00:14:17.150 "base_bdevs_list": [ 00:14:17.150 { 00:14:17.150 "name": "BaseBdev1", 00:14:17.150 "uuid": "fa64c75f-828e-5287-b2c6-0050d6c2f8fd", 00:14:17.150 "is_configured": true, 00:14:17.150 "data_offset": 2048, 00:14:17.150 "data_size": 63488 00:14:17.150 }, 00:14:17.150 { 00:14:17.150 "name": "BaseBdev2", 00:14:17.150 "uuid": "3607dc0d-eafb-5626-856b-2c18a23c739f", 00:14:17.150 "is_configured": true, 00:14:17.150 "data_offset": 2048, 00:14:17.150 "data_size": 63488 00:14:17.150 } 00:14:17.150 ] 00:14:17.150 }' 00:14:17.150 21:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.150 21:41:17 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:17.408 21:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:17.408 21:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:17.409 21:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.409 21:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.409 [2024-12-10 21:41:18.188339] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:17.668 21:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.668 21:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:17.668 21:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.668 21:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:17.668 21:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.668 21:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.668 21:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.668 21:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:17.668 21:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:17.668 21:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:17.668 21:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:17.668 21:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:17.668 21:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:14:17.668 21:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:17.668 21:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:17.668 21:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:17.668 21:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:17.668 21:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:17.668 21:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:17.668 21:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:17.668 21:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:17.927 [2024-12-10 21:41:18.479701] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:17.927 /dev/nbd0 00:14:17.927 21:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:17.927 21:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:17.927 21:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:17.927 21:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:17.927 21:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:17.927 21:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:17.927 21:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:17.927 21:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:17.927 21:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:14:17.927 21:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:17.927 21:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:17.927 1+0 records in 00:14:17.927 1+0 records out 00:14:17.927 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000724318 s, 5.7 MB/s 00:14:17.927 21:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:17.927 21:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:17.927 21:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:17.927 21:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:17.927 21:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:17.927 21:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:17.927 21:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:17.927 21:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:17.927 21:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:17.927 21:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:22.114 63488+0 records in 00:14:22.114 63488+0 records out 00:14:22.114 32505856 bytes (33 MB, 31 MiB) copied, 4.2313 s, 7.7 MB/s 00:14:22.114 21:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:22.114 21:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:22.114 21:41:22 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:22.114 21:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:22.114 21:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:22.114 21:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:22.114 21:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:22.373 21:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:22.373 [2024-12-10 21:41:23.005826] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.373 21:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:22.373 21:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:22.373 21:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:22.373 21:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:22.373 21:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:22.373 21:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:22.373 21:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:22.373 21:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:22.373 21:41:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.373 21:41:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.373 [2024-12-10 21:41:23.014517] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:22.373 21:41:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:22.373 21:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:22.373 21:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:22.373 21:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:22.373 21:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:22.373 21:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:22.373 21:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:22.373 21:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.373 21:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.373 21:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.373 21:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.373 21:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.373 21:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.373 21:41:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.373 21:41:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.373 21:41:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.373 21:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.373 "name": "raid_bdev1", 00:14:22.373 "uuid": "36a8878b-7a18-4587-a063-5700a2f340bf", 00:14:22.373 "strip_size_kb": 0, 00:14:22.373 "state": "online", 00:14:22.373 "raid_level": "raid1", 00:14:22.373 "superblock": true, 
00:14:22.373 "num_base_bdevs": 2, 00:14:22.373 "num_base_bdevs_discovered": 1, 00:14:22.373 "num_base_bdevs_operational": 1, 00:14:22.373 "base_bdevs_list": [ 00:14:22.373 { 00:14:22.373 "name": null, 00:14:22.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.373 "is_configured": false, 00:14:22.373 "data_offset": 0, 00:14:22.373 "data_size": 63488 00:14:22.373 }, 00:14:22.373 { 00:14:22.373 "name": "BaseBdev2", 00:14:22.373 "uuid": "3607dc0d-eafb-5626-856b-2c18a23c739f", 00:14:22.373 "is_configured": true, 00:14:22.373 "data_offset": 2048, 00:14:22.373 "data_size": 63488 00:14:22.373 } 00:14:22.373 ] 00:14:22.373 }' 00:14:22.373 21:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.373 21:41:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.941 21:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:22.941 21:41:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.941 21:41:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.941 [2024-12-10 21:41:23.477772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:22.941 [2024-12-10 21:41:23.496971] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:14:22.941 21:41:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.941 21:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:22.941 [2024-12-10 21:41:23.499163] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:23.877 21:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:23.877 21:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:14:23.877 21:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:23.877 21:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:23.877 21:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.877 21:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.877 21:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.877 21:41:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.877 21:41:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.877 21:41:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.877 21:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.877 "name": "raid_bdev1", 00:14:23.877 "uuid": "36a8878b-7a18-4587-a063-5700a2f340bf", 00:14:23.877 "strip_size_kb": 0, 00:14:23.877 "state": "online", 00:14:23.877 "raid_level": "raid1", 00:14:23.877 "superblock": true, 00:14:23.877 "num_base_bdevs": 2, 00:14:23.877 "num_base_bdevs_discovered": 2, 00:14:23.877 "num_base_bdevs_operational": 2, 00:14:23.877 "process": { 00:14:23.877 "type": "rebuild", 00:14:23.877 "target": "spare", 00:14:23.877 "progress": { 00:14:23.877 "blocks": 20480, 00:14:23.877 "percent": 32 00:14:23.877 } 00:14:23.877 }, 00:14:23.877 "base_bdevs_list": [ 00:14:23.877 { 00:14:23.878 "name": "spare", 00:14:23.878 "uuid": "224f1d6c-c92a-51cc-b44c-ad6b00f89f77", 00:14:23.878 "is_configured": true, 00:14:23.878 "data_offset": 2048, 00:14:23.878 "data_size": 63488 00:14:23.878 }, 00:14:23.878 { 00:14:23.878 "name": "BaseBdev2", 00:14:23.878 "uuid": "3607dc0d-eafb-5626-856b-2c18a23c739f", 00:14:23.878 "is_configured": true, 00:14:23.878 "data_offset": 2048, 00:14:23.878 "data_size": 63488 
00:14:23.878 } 00:14:23.878 ] 00:14:23.878 }' 00:14:23.878 21:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.878 21:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:23.878 21:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.878 21:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:23.878 21:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:23.878 21:41:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.878 21:41:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.137 [2024-12-10 21:41:24.662021] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:24.137 [2024-12-10 21:41:24.705013] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:24.137 [2024-12-10 21:41:24.705108] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.137 [2024-12-10 21:41:24.705124] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:24.137 [2024-12-10 21:41:24.705134] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:24.137 21:41:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.137 21:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:24.137 21:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.137 21:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.137 21:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:14:24.137 21:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.137 21:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:24.137 21:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.137 21:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.137 21:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.137 21:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.137 21:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.137 21:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.137 21:41:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.137 21:41:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.137 21:41:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.137 21:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.137 "name": "raid_bdev1", 00:14:24.137 "uuid": "36a8878b-7a18-4587-a063-5700a2f340bf", 00:14:24.137 "strip_size_kb": 0, 00:14:24.137 "state": "online", 00:14:24.137 "raid_level": "raid1", 00:14:24.137 "superblock": true, 00:14:24.137 "num_base_bdevs": 2, 00:14:24.137 "num_base_bdevs_discovered": 1, 00:14:24.137 "num_base_bdevs_operational": 1, 00:14:24.137 "base_bdevs_list": [ 00:14:24.137 { 00:14:24.137 "name": null, 00:14:24.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.137 "is_configured": false, 00:14:24.137 "data_offset": 0, 00:14:24.137 "data_size": 63488 00:14:24.137 }, 00:14:24.137 { 00:14:24.137 "name": "BaseBdev2", 00:14:24.137 "uuid": 
"3607dc0d-eafb-5626-856b-2c18a23c739f", 00:14:24.137 "is_configured": true, 00:14:24.137 "data_offset": 2048, 00:14:24.137 "data_size": 63488 00:14:24.137 } 00:14:24.137 ] 00:14:24.137 }' 00:14:24.137 21:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.137 21:41:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.705 21:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:24.705 21:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.705 21:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:24.705 21:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:24.705 21:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.705 21:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.705 21:41:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.705 21:41:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.705 21:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.705 21:41:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.705 21:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.705 "name": "raid_bdev1", 00:14:24.705 "uuid": "36a8878b-7a18-4587-a063-5700a2f340bf", 00:14:24.705 "strip_size_kb": 0, 00:14:24.705 "state": "online", 00:14:24.705 "raid_level": "raid1", 00:14:24.705 "superblock": true, 00:14:24.705 "num_base_bdevs": 2, 00:14:24.705 "num_base_bdevs_discovered": 1, 00:14:24.705 "num_base_bdevs_operational": 1, 00:14:24.705 "base_bdevs_list": [ 00:14:24.705 { 
00:14:24.705 "name": null, 00:14:24.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.705 "is_configured": false, 00:14:24.705 "data_offset": 0, 00:14:24.705 "data_size": 63488 00:14:24.705 }, 00:14:24.705 { 00:14:24.705 "name": "BaseBdev2", 00:14:24.705 "uuid": "3607dc0d-eafb-5626-856b-2c18a23c739f", 00:14:24.705 "is_configured": true, 00:14:24.705 "data_offset": 2048, 00:14:24.705 "data_size": 63488 00:14:24.705 } 00:14:24.705 ] 00:14:24.705 }' 00:14:24.705 21:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.705 21:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:24.705 21:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.705 21:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:24.705 21:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:24.706 21:41:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.706 21:41:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.706 [2024-12-10 21:41:25.364127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:24.706 [2024-12-10 21:41:25.379363] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:14:24.706 21:41:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.706 21:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:24.706 [2024-12-10 21:41:25.381281] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:25.641 21:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:25.641 21:41:26 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.641 21:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:25.641 21:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:25.641 21:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.641 21:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.641 21:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.641 21:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.641 21:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.641 21:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.900 21:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.900 "name": "raid_bdev1", 00:14:25.900 "uuid": "36a8878b-7a18-4587-a063-5700a2f340bf", 00:14:25.900 "strip_size_kb": 0, 00:14:25.900 "state": "online", 00:14:25.900 "raid_level": "raid1", 00:14:25.900 "superblock": true, 00:14:25.900 "num_base_bdevs": 2, 00:14:25.900 "num_base_bdevs_discovered": 2, 00:14:25.900 "num_base_bdevs_operational": 2, 00:14:25.900 "process": { 00:14:25.900 "type": "rebuild", 00:14:25.900 "target": "spare", 00:14:25.900 "progress": { 00:14:25.900 "blocks": 20480, 00:14:25.900 "percent": 32 00:14:25.900 } 00:14:25.900 }, 00:14:25.900 "base_bdevs_list": [ 00:14:25.900 { 00:14:25.900 "name": "spare", 00:14:25.900 "uuid": "224f1d6c-c92a-51cc-b44c-ad6b00f89f77", 00:14:25.900 "is_configured": true, 00:14:25.900 "data_offset": 2048, 00:14:25.900 "data_size": 63488 00:14:25.900 }, 00:14:25.900 { 00:14:25.900 "name": "BaseBdev2", 00:14:25.900 "uuid": "3607dc0d-eafb-5626-856b-2c18a23c739f", 00:14:25.900 
"is_configured": true, 00:14:25.900 "data_offset": 2048, 00:14:25.900 "data_size": 63488 00:14:25.900 } 00:14:25.900 ] 00:14:25.900 }' 00:14:25.900 21:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.900 21:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:25.900 21:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.900 21:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:25.900 21:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:25.900 21:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:25.900 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:25.900 21:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:25.900 21:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:25.900 21:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:25.900 21:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=398 00:14:25.900 21:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:25.900 21:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:25.900 21:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.900 21:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:25.900 21:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:25.900 21:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:14:25.900 21:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.900 21:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.900 21:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.900 21:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.900 21:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.900 21:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.900 "name": "raid_bdev1", 00:14:25.900 "uuid": "36a8878b-7a18-4587-a063-5700a2f340bf", 00:14:25.900 "strip_size_kb": 0, 00:14:25.900 "state": "online", 00:14:25.900 "raid_level": "raid1", 00:14:25.900 "superblock": true, 00:14:25.900 "num_base_bdevs": 2, 00:14:25.900 "num_base_bdevs_discovered": 2, 00:14:25.900 "num_base_bdevs_operational": 2, 00:14:25.900 "process": { 00:14:25.900 "type": "rebuild", 00:14:25.900 "target": "spare", 00:14:25.900 "progress": { 00:14:25.900 "blocks": 22528, 00:14:25.900 "percent": 35 00:14:25.900 } 00:14:25.900 }, 00:14:25.900 "base_bdevs_list": [ 00:14:25.900 { 00:14:25.900 "name": "spare", 00:14:25.900 "uuid": "224f1d6c-c92a-51cc-b44c-ad6b00f89f77", 00:14:25.900 "is_configured": true, 00:14:25.900 "data_offset": 2048, 00:14:25.900 "data_size": 63488 00:14:25.900 }, 00:14:25.900 { 00:14:25.900 "name": "BaseBdev2", 00:14:25.900 "uuid": "3607dc0d-eafb-5626-856b-2c18a23c739f", 00:14:25.900 "is_configured": true, 00:14:25.900 "data_offset": 2048, 00:14:25.900 "data_size": 63488 00:14:25.900 } 00:14:25.900 ] 00:14:25.900 }' 00:14:25.900 21:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.900 21:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:25.900 21:41:26 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.901 21:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:25.901 21:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:27.276 21:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:27.276 21:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:27.276 21:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.276 21:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:27.276 21:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:27.277 21:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.277 21:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.277 21:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.277 21:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.277 21:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.277 21:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.277 21:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.277 "name": "raid_bdev1", 00:14:27.277 "uuid": "36a8878b-7a18-4587-a063-5700a2f340bf", 00:14:27.277 "strip_size_kb": 0, 00:14:27.277 "state": "online", 00:14:27.277 "raid_level": "raid1", 00:14:27.277 "superblock": true, 00:14:27.277 "num_base_bdevs": 2, 00:14:27.277 "num_base_bdevs_discovered": 2, 00:14:27.277 "num_base_bdevs_operational": 2, 00:14:27.277 "process": { 
00:14:27.277 "type": "rebuild", 00:14:27.277 "target": "spare", 00:14:27.277 "progress": { 00:14:27.277 "blocks": 45056, 00:14:27.277 "percent": 70 00:14:27.277 } 00:14:27.277 }, 00:14:27.277 "base_bdevs_list": [ 00:14:27.277 { 00:14:27.277 "name": "spare", 00:14:27.277 "uuid": "224f1d6c-c92a-51cc-b44c-ad6b00f89f77", 00:14:27.277 "is_configured": true, 00:14:27.277 "data_offset": 2048, 00:14:27.277 "data_size": 63488 00:14:27.277 }, 00:14:27.277 { 00:14:27.277 "name": "BaseBdev2", 00:14:27.277 "uuid": "3607dc0d-eafb-5626-856b-2c18a23c739f", 00:14:27.277 "is_configured": true, 00:14:27.277 "data_offset": 2048, 00:14:27.277 "data_size": 63488 00:14:27.277 } 00:14:27.277 ] 00:14:27.277 }' 00:14:27.277 21:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.277 21:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:27.277 21:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.277 21:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:27.277 21:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:27.878 [2024-12-10 21:41:28.495889] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:27.878 [2024-12-10 21:41:28.496037] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:27.878 [2024-12-10 21:41:28.496195] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.137 21:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:28.137 21:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:28.137 21:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.137 
21:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:28.137 21:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:28.137 21:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.137 21:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.137 21:41:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.137 21:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.137 21:41:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.137 21:41:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.137 21:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.137 "name": "raid_bdev1", 00:14:28.137 "uuid": "36a8878b-7a18-4587-a063-5700a2f340bf", 00:14:28.137 "strip_size_kb": 0, 00:14:28.137 "state": "online", 00:14:28.137 "raid_level": "raid1", 00:14:28.137 "superblock": true, 00:14:28.137 "num_base_bdevs": 2, 00:14:28.137 "num_base_bdevs_discovered": 2, 00:14:28.137 "num_base_bdevs_operational": 2, 00:14:28.137 "base_bdevs_list": [ 00:14:28.137 { 00:14:28.137 "name": "spare", 00:14:28.137 "uuid": "224f1d6c-c92a-51cc-b44c-ad6b00f89f77", 00:14:28.137 "is_configured": true, 00:14:28.137 "data_offset": 2048, 00:14:28.137 "data_size": 63488 00:14:28.137 }, 00:14:28.137 { 00:14:28.137 "name": "BaseBdev2", 00:14:28.137 "uuid": "3607dc0d-eafb-5626-856b-2c18a23c739f", 00:14:28.137 "is_configured": true, 00:14:28.137 "data_offset": 2048, 00:14:28.137 "data_size": 63488 00:14:28.137 } 00:14:28.137 ] 00:14:28.137 }' 00:14:28.137 21:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.137 21:41:28 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:28.395 21:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.395 21:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:28.395 21:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:28.395 21:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:28.395 21:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.395 21:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:28.395 21:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:28.395 21:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.395 21:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.395 21:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.395 21:41:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.395 21:41:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.395 21:41:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.395 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.395 "name": "raid_bdev1", 00:14:28.395 "uuid": "36a8878b-7a18-4587-a063-5700a2f340bf", 00:14:28.395 "strip_size_kb": 0, 00:14:28.395 "state": "online", 00:14:28.395 "raid_level": "raid1", 00:14:28.395 "superblock": true, 00:14:28.395 "num_base_bdevs": 2, 00:14:28.395 "num_base_bdevs_discovered": 2, 00:14:28.395 "num_base_bdevs_operational": 2, 00:14:28.395 "base_bdevs_list": [ 00:14:28.395 { 00:14:28.395 
"name": "spare", 00:14:28.395 "uuid": "224f1d6c-c92a-51cc-b44c-ad6b00f89f77", 00:14:28.396 "is_configured": true, 00:14:28.396 "data_offset": 2048, 00:14:28.396 "data_size": 63488 00:14:28.396 }, 00:14:28.396 { 00:14:28.396 "name": "BaseBdev2", 00:14:28.396 "uuid": "3607dc0d-eafb-5626-856b-2c18a23c739f", 00:14:28.396 "is_configured": true, 00:14:28.396 "data_offset": 2048, 00:14:28.396 "data_size": 63488 00:14:28.396 } 00:14:28.396 ] 00:14:28.396 }' 00:14:28.396 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.396 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:28.396 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.396 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:28.396 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:28.396 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:28.396 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:28.396 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:28.396 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:28.396 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:28.396 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.396 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.396 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.396 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:14:28.396 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.396 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.396 21:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.396 21:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.396 21:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.396 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.396 "name": "raid_bdev1", 00:14:28.396 "uuid": "36a8878b-7a18-4587-a063-5700a2f340bf", 00:14:28.396 "strip_size_kb": 0, 00:14:28.396 "state": "online", 00:14:28.396 "raid_level": "raid1", 00:14:28.396 "superblock": true, 00:14:28.396 "num_base_bdevs": 2, 00:14:28.396 "num_base_bdevs_discovered": 2, 00:14:28.396 "num_base_bdevs_operational": 2, 00:14:28.396 "base_bdevs_list": [ 00:14:28.396 { 00:14:28.396 "name": "spare", 00:14:28.396 "uuid": "224f1d6c-c92a-51cc-b44c-ad6b00f89f77", 00:14:28.396 "is_configured": true, 00:14:28.396 "data_offset": 2048, 00:14:28.396 "data_size": 63488 00:14:28.396 }, 00:14:28.396 { 00:14:28.396 "name": "BaseBdev2", 00:14:28.396 "uuid": "3607dc0d-eafb-5626-856b-2c18a23c739f", 00:14:28.396 "is_configured": true, 00:14:28.396 "data_offset": 2048, 00:14:28.396 "data_size": 63488 00:14:28.396 } 00:14:28.396 ] 00:14:28.396 }' 00:14:28.396 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.396 21:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.962 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:28.962 21:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.962 21:41:29 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:28.962 [2024-12-10 21:41:29.519314] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:28.962 [2024-12-10 21:41:29.519392] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:28.962 [2024-12-10 21:41:29.519503] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:28.962 [2024-12-10 21:41:29.519591] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:28.962 [2024-12-10 21:41:29.519629] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:28.962 21:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.962 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.962 21:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.962 21:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.962 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:28.962 21:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.962 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:28.962 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:28.962 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:28.962 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:28.962 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:28.962 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:14:28.962 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:28.962 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:28.962 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:28.962 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:28.962 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:28.962 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:28.962 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:29.219 /dev/nbd0 00:14:29.219 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:29.219 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:29.219 21:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:29.219 21:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:29.219 21:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:29.219 21:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:29.219 21:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:29.219 21:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:29.219 21:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:29.219 21:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:29.219 21:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:29.219 1+0 records in 00:14:29.219 1+0 records out 00:14:29.219 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00055971 s, 7.3 MB/s 00:14:29.219 21:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:29.219 21:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:29.219 21:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:29.219 21:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:29.219 21:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:29.219 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:29.219 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:29.219 21:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:29.477 /dev/nbd1 00:14:29.477 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:29.477 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:29.477 21:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:29.477 21:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:29.477 21:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:29.477 21:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:29.477 21:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:29.477 21:41:30 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:29.477 21:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:29.477 21:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:29.477 21:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:29.478 1+0 records in 00:14:29.478 1+0 records out 00:14:29.478 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042229 s, 9.7 MB/s 00:14:29.478 21:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:29.478 21:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:29.478 21:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:29.478 21:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:29.478 21:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:29.478 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:29.478 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:29.478 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:29.735 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:29.735 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:29.735 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:29.735 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:29.735 
21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:29.735 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:29.735 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:29.736 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:29.994 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:29.994 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:29.994 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:29.994 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:29.994 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:29.994 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:29.994 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:29.994 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:29.994 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:29.994 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:29.994 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:29.994 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:29.994 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:29.994 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:29.994 21:41:30 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:29.994 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:29.994 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:29.994 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:29.994 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:29.994 21:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.994 21:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.994 21:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.994 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:29.994 21:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.994 21:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.994 [2024-12-10 21:41:30.765010] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:29.994 [2024-12-10 21:41:30.765124] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.994 [2024-12-10 21:41:30.765154] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:29.994 [2024-12-10 21:41:30.765163] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.994 [2024-12-10 21:41:30.767458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.994 [2024-12-10 21:41:30.767528] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:29.994 [2024-12-10 21:41:30.767655] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:29.994 [2024-12-10 
21:41:30.767741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:29.994 [2024-12-10 21:41:30.767954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:29.994 spare 00:14:29.994 21:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.994 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:29.994 21:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.994 21:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.253 [2024-12-10 21:41:30.867910] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:30.253 [2024-12-10 21:41:30.868030] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:30.253 [2024-12-10 21:41:30.868405] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:14:30.253 [2024-12-10 21:41:30.868675] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:30.253 [2024-12-10 21:41:30.868731] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:30.253 [2024-12-10 21:41:30.868994] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.253 21:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.253 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:30.253 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.253 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.253 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:14:30.253 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:30.253 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:30.253 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.253 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.253 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.253 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.253 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.253 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.253 21:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.253 21:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.253 21:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.253 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.253 "name": "raid_bdev1", 00:14:30.253 "uuid": "36a8878b-7a18-4587-a063-5700a2f340bf", 00:14:30.253 "strip_size_kb": 0, 00:14:30.253 "state": "online", 00:14:30.253 "raid_level": "raid1", 00:14:30.253 "superblock": true, 00:14:30.253 "num_base_bdevs": 2, 00:14:30.253 "num_base_bdevs_discovered": 2, 00:14:30.253 "num_base_bdevs_operational": 2, 00:14:30.253 "base_bdevs_list": [ 00:14:30.253 { 00:14:30.253 "name": "spare", 00:14:30.253 "uuid": "224f1d6c-c92a-51cc-b44c-ad6b00f89f77", 00:14:30.253 "is_configured": true, 00:14:30.253 "data_offset": 2048, 00:14:30.253 "data_size": 63488 00:14:30.253 }, 00:14:30.253 { 00:14:30.253 "name": "BaseBdev2", 00:14:30.253 "uuid": 
"3607dc0d-eafb-5626-856b-2c18a23c739f", 00:14:30.253 "is_configured": true, 00:14:30.253 "data_offset": 2048, 00:14:30.253 "data_size": 63488 00:14:30.253 } 00:14:30.253 ] 00:14:30.253 }' 00:14:30.253 21:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.253 21:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.821 21:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:30.821 21:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.821 21:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.822 "name": "raid_bdev1", 00:14:30.822 "uuid": "36a8878b-7a18-4587-a063-5700a2f340bf", 00:14:30.822 "strip_size_kb": 0, 00:14:30.822 "state": "online", 00:14:30.822 "raid_level": "raid1", 00:14:30.822 "superblock": true, 00:14:30.822 "num_base_bdevs": 2, 00:14:30.822 "num_base_bdevs_discovered": 2, 00:14:30.822 "num_base_bdevs_operational": 2, 00:14:30.822 "base_bdevs_list": [ 00:14:30.822 { 
00:14:30.822 "name": "spare", 00:14:30.822 "uuid": "224f1d6c-c92a-51cc-b44c-ad6b00f89f77", 00:14:30.822 "is_configured": true, 00:14:30.822 "data_offset": 2048, 00:14:30.822 "data_size": 63488 00:14:30.822 }, 00:14:30.822 { 00:14:30.822 "name": "BaseBdev2", 00:14:30.822 "uuid": "3607dc0d-eafb-5626-856b-2c18a23c739f", 00:14:30.822 "is_configured": true, 00:14:30.822 "data_offset": 2048, 00:14:30.822 "data_size": 63488 00:14:30.822 } 00:14:30.822 ] 00:14:30.822 }' 00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.822 [2024-12-10 21:41:31.535929] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.822 "name": "raid_bdev1", 00:14:30.822 "uuid": "36a8878b-7a18-4587-a063-5700a2f340bf", 00:14:30.822 "strip_size_kb": 0, 00:14:30.822 
"state": "online", 00:14:30.822 "raid_level": "raid1", 00:14:30.822 "superblock": true, 00:14:30.822 "num_base_bdevs": 2, 00:14:30.822 "num_base_bdevs_discovered": 1, 00:14:30.822 "num_base_bdevs_operational": 1, 00:14:30.822 "base_bdevs_list": [ 00:14:30.822 { 00:14:30.822 "name": null, 00:14:30.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.822 "is_configured": false, 00:14:30.822 "data_offset": 0, 00:14:30.822 "data_size": 63488 00:14:30.822 }, 00:14:30.822 { 00:14:30.822 "name": "BaseBdev2", 00:14:30.822 "uuid": "3607dc0d-eafb-5626-856b-2c18a23c739f", 00:14:30.822 "is_configured": true, 00:14:30.822 "data_offset": 2048, 00:14:30.822 "data_size": 63488 00:14:30.822 } 00:14:30.822 ] 00:14:30.822 }' 00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.822 21:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.388 21:41:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:31.388 21:41:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.388 21:41:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.388 [2024-12-10 21:41:32.055180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:31.388 [2024-12-10 21:41:32.055464] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:31.388 [2024-12-10 21:41:32.055555] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:31.388 [2024-12-10 21:41:32.055625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:31.388 [2024-12-10 21:41:32.072720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:14:31.388 21:41:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.388 21:41:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:31.388 [2024-12-10 21:41:32.074826] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:32.404 21:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:32.404 21:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:32.404 21:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:32.404 21:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:32.404 21:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:32.404 21:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.404 21:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.404 21:41:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.404 21:41:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.404 21:41:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.404 21:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.404 "name": "raid_bdev1", 00:14:32.404 "uuid": "36a8878b-7a18-4587-a063-5700a2f340bf", 00:14:32.404 "strip_size_kb": 0, 00:14:32.404 "state": "online", 00:14:32.404 "raid_level": "raid1", 
00:14:32.404 "superblock": true, 00:14:32.404 "num_base_bdevs": 2, 00:14:32.404 "num_base_bdevs_discovered": 2, 00:14:32.404 "num_base_bdevs_operational": 2, 00:14:32.404 "process": { 00:14:32.404 "type": "rebuild", 00:14:32.404 "target": "spare", 00:14:32.404 "progress": { 00:14:32.404 "blocks": 20480, 00:14:32.404 "percent": 32 00:14:32.404 } 00:14:32.404 }, 00:14:32.404 "base_bdevs_list": [ 00:14:32.404 { 00:14:32.404 "name": "spare", 00:14:32.404 "uuid": "224f1d6c-c92a-51cc-b44c-ad6b00f89f77", 00:14:32.404 "is_configured": true, 00:14:32.404 "data_offset": 2048, 00:14:32.404 "data_size": 63488 00:14:32.404 }, 00:14:32.404 { 00:14:32.404 "name": "BaseBdev2", 00:14:32.404 "uuid": "3607dc0d-eafb-5626-856b-2c18a23c739f", 00:14:32.404 "is_configured": true, 00:14:32.404 "data_offset": 2048, 00:14:32.404 "data_size": 63488 00:14:32.404 } 00:14:32.404 ] 00:14:32.404 }' 00:14:32.404 21:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.404 21:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:32.404 21:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.663 21:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:32.663 21:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:32.663 21:41:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.663 21:41:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.663 [2024-12-10 21:41:33.238378] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:32.663 [2024-12-10 21:41:33.280637] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:32.664 [2024-12-10 21:41:33.280806] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:14:32.664 [2024-12-10 21:41:33.280844] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:32.664 [2024-12-10 21:41:33.280868] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:32.664 21:41:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.664 21:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:32.664 21:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.664 21:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.664 21:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:32.664 21:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:32.664 21:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:32.664 21:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.664 21:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.664 21:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.664 21:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.664 21:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.664 21:41:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.664 21:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.664 21:41:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.664 21:41:33 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.664 21:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.664 "name": "raid_bdev1", 00:14:32.664 "uuid": "36a8878b-7a18-4587-a063-5700a2f340bf", 00:14:32.664 "strip_size_kb": 0, 00:14:32.664 "state": "online", 00:14:32.664 "raid_level": "raid1", 00:14:32.664 "superblock": true, 00:14:32.664 "num_base_bdevs": 2, 00:14:32.664 "num_base_bdevs_discovered": 1, 00:14:32.664 "num_base_bdevs_operational": 1, 00:14:32.664 "base_bdevs_list": [ 00:14:32.664 { 00:14:32.664 "name": null, 00:14:32.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.664 "is_configured": false, 00:14:32.664 "data_offset": 0, 00:14:32.664 "data_size": 63488 00:14:32.664 }, 00:14:32.664 { 00:14:32.664 "name": "BaseBdev2", 00:14:32.664 "uuid": "3607dc0d-eafb-5626-856b-2c18a23c739f", 00:14:32.664 "is_configured": true, 00:14:32.664 "data_offset": 2048, 00:14:32.664 "data_size": 63488 00:14:32.664 } 00:14:32.664 ] 00:14:32.664 }' 00:14:32.664 21:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.664 21:41:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.232 21:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:33.232 21:41:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.232 21:41:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.232 [2024-12-10 21:41:33.816758] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:33.232 [2024-12-10 21:41:33.816833] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.232 [2024-12-10 21:41:33.816859] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:33.232 [2024-12-10 21:41:33.816871] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.232 [2024-12-10 21:41:33.817389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.232 [2024-12-10 21:41:33.817433] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:33.232 [2024-12-10 21:41:33.817547] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:33.232 [2024-12-10 21:41:33.817564] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:33.232 [2024-12-10 21:41:33.817575] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:33.232 [2024-12-10 21:41:33.817603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:33.232 [2024-12-10 21:41:33.835458] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:33.232 spare 00:14:33.232 21:41:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.232 21:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:33.232 [2024-12-10 21:41:33.837560] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:34.170 21:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:34.170 21:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.170 21:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:34.170 21:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:34.170 21:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.170 21:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:34.170 21:41:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.170 21:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.170 21:41:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.170 21:41:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.170 21:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.170 "name": "raid_bdev1", 00:14:34.170 "uuid": "36a8878b-7a18-4587-a063-5700a2f340bf", 00:14:34.170 "strip_size_kb": 0, 00:14:34.170 "state": "online", 00:14:34.170 "raid_level": "raid1", 00:14:34.170 "superblock": true, 00:14:34.170 "num_base_bdevs": 2, 00:14:34.170 "num_base_bdevs_discovered": 2, 00:14:34.170 "num_base_bdevs_operational": 2, 00:14:34.170 "process": { 00:14:34.170 "type": "rebuild", 00:14:34.170 "target": "spare", 00:14:34.170 "progress": { 00:14:34.170 "blocks": 20480, 00:14:34.170 "percent": 32 00:14:34.170 } 00:14:34.170 }, 00:14:34.170 "base_bdevs_list": [ 00:14:34.170 { 00:14:34.170 "name": "spare", 00:14:34.170 "uuid": "224f1d6c-c92a-51cc-b44c-ad6b00f89f77", 00:14:34.170 "is_configured": true, 00:14:34.170 "data_offset": 2048, 00:14:34.170 "data_size": 63488 00:14:34.170 }, 00:14:34.170 { 00:14:34.170 "name": "BaseBdev2", 00:14:34.170 "uuid": "3607dc0d-eafb-5626-856b-2c18a23c739f", 00:14:34.170 "is_configured": true, 00:14:34.170 "data_offset": 2048, 00:14:34.170 "data_size": 63488 00:14:34.170 } 00:14:34.170 ] 00:14:34.170 }' 00:14:34.170 21:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.170 21:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:34.170 21:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.429 
21:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:34.429 21:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:34.429 21:41:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.429 21:41:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.429 [2024-12-10 21:41:34.985059] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:34.429 [2024-12-10 21:41:35.043310] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:34.429 [2024-12-10 21:41:35.043388] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.429 [2024-12-10 21:41:35.043406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:34.429 [2024-12-10 21:41:35.043414] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:34.429 21:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.429 21:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:34.429 21:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:34.429 21:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.429 21:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:34.429 21:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:34.429 21:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:34.430 21:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.430 21:41:35 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.430 21:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.430 21:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.430 21:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.430 21:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.430 21:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.430 21:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.430 21:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.430 21:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.430 "name": "raid_bdev1", 00:14:34.430 "uuid": "36a8878b-7a18-4587-a063-5700a2f340bf", 00:14:34.430 "strip_size_kb": 0, 00:14:34.430 "state": "online", 00:14:34.430 "raid_level": "raid1", 00:14:34.430 "superblock": true, 00:14:34.430 "num_base_bdevs": 2, 00:14:34.430 "num_base_bdevs_discovered": 1, 00:14:34.430 "num_base_bdevs_operational": 1, 00:14:34.430 "base_bdevs_list": [ 00:14:34.430 { 00:14:34.430 "name": null, 00:14:34.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.430 "is_configured": false, 00:14:34.430 "data_offset": 0, 00:14:34.430 "data_size": 63488 00:14:34.430 }, 00:14:34.430 { 00:14:34.430 "name": "BaseBdev2", 00:14:34.430 "uuid": "3607dc0d-eafb-5626-856b-2c18a23c739f", 00:14:34.430 "is_configured": true, 00:14:34.430 "data_offset": 2048, 00:14:34.430 "data_size": 63488 00:14:34.430 } 00:14:34.430 ] 00:14:34.430 }' 00:14:34.430 21:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.430 21:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.997 21:41:35 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:34.997 21:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.997 21:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:34.997 21:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:34.997 21:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.997 21:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.997 21:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.997 21:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.997 21:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.997 21:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.997 21:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.997 "name": "raid_bdev1", 00:14:34.997 "uuid": "36a8878b-7a18-4587-a063-5700a2f340bf", 00:14:34.997 "strip_size_kb": 0, 00:14:34.997 "state": "online", 00:14:34.997 "raid_level": "raid1", 00:14:34.997 "superblock": true, 00:14:34.997 "num_base_bdevs": 2, 00:14:34.997 "num_base_bdevs_discovered": 1, 00:14:34.997 "num_base_bdevs_operational": 1, 00:14:34.997 "base_bdevs_list": [ 00:14:34.997 { 00:14:34.997 "name": null, 00:14:34.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.997 "is_configured": false, 00:14:34.997 "data_offset": 0, 00:14:34.997 "data_size": 63488 00:14:34.997 }, 00:14:34.997 { 00:14:34.997 "name": "BaseBdev2", 00:14:34.997 "uuid": "3607dc0d-eafb-5626-856b-2c18a23c739f", 00:14:34.997 "is_configured": true, 00:14:34.997 "data_offset": 2048, 00:14:34.997 "data_size": 
63488 00:14:34.997 } 00:14:34.997 ] 00:14:34.997 }' 00:14:34.997 21:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.997 21:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:34.997 21:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.997 21:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:34.997 21:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:34.997 21:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.997 21:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.997 21:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.997 21:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:34.997 21:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.997 21:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.997 [2024-12-10 21:41:35.675759] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:34.997 [2024-12-10 21:41:35.675867] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.997 [2024-12-10 21:41:35.675940] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:34.997 [2024-12-10 21:41:35.675988] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.997 [2024-12-10 21:41:35.676512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.997 [2024-12-10 21:41:35.676574] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:14:34.997 [2024-12-10 21:41:35.676696] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:34.997 [2024-12-10 21:41:35.676742] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:34.997 [2024-12-10 21:41:35.676787] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:34.997 [2024-12-10 21:41:35.676816] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:34.997 BaseBdev1 00:14:34.997 21:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.997 21:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:35.936 21:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:35.936 21:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:35.936 21:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:35.936 21:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:35.936 21:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:35.936 21:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:35.936 21:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.936 21:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.936 21:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.936 21:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.936 21:41:36 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.936 21:41:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.936 21:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.936 21:41:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.936 21:41:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.195 21:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.195 "name": "raid_bdev1", 00:14:36.195 "uuid": "36a8878b-7a18-4587-a063-5700a2f340bf", 00:14:36.195 "strip_size_kb": 0, 00:14:36.195 "state": "online", 00:14:36.195 "raid_level": "raid1", 00:14:36.195 "superblock": true, 00:14:36.195 "num_base_bdevs": 2, 00:14:36.195 "num_base_bdevs_discovered": 1, 00:14:36.195 "num_base_bdevs_operational": 1, 00:14:36.195 "base_bdevs_list": [ 00:14:36.195 { 00:14:36.195 "name": null, 00:14:36.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.195 "is_configured": false, 00:14:36.195 "data_offset": 0, 00:14:36.195 "data_size": 63488 00:14:36.195 }, 00:14:36.195 { 00:14:36.195 "name": "BaseBdev2", 00:14:36.195 "uuid": "3607dc0d-eafb-5626-856b-2c18a23c739f", 00:14:36.195 "is_configured": true, 00:14:36.195 "data_offset": 2048, 00:14:36.195 "data_size": 63488 00:14:36.195 } 00:14:36.195 ] 00:14:36.195 }' 00:14:36.195 21:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.195 21:41:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.455 21:41:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:36.455 21:41:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:36.455 21:41:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:14:36.455 21:41:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:36.455 21:41:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:36.455 21:41:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.455 21:41:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.455 21:41:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.455 21:41:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.455 21:41:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.455 21:41:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:36.455 "name": "raid_bdev1", 00:14:36.455 "uuid": "36a8878b-7a18-4587-a063-5700a2f340bf", 00:14:36.455 "strip_size_kb": 0, 00:14:36.455 "state": "online", 00:14:36.455 "raid_level": "raid1", 00:14:36.455 "superblock": true, 00:14:36.455 "num_base_bdevs": 2, 00:14:36.455 "num_base_bdevs_discovered": 1, 00:14:36.455 "num_base_bdevs_operational": 1, 00:14:36.455 "base_bdevs_list": [ 00:14:36.455 { 00:14:36.455 "name": null, 00:14:36.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.455 "is_configured": false, 00:14:36.455 "data_offset": 0, 00:14:36.455 "data_size": 63488 00:14:36.455 }, 00:14:36.455 { 00:14:36.455 "name": "BaseBdev2", 00:14:36.455 "uuid": "3607dc0d-eafb-5626-856b-2c18a23c739f", 00:14:36.455 "is_configured": true, 00:14:36.455 "data_offset": 2048, 00:14:36.455 "data_size": 63488 00:14:36.455 } 00:14:36.455 ] 00:14:36.455 }' 00:14:36.455 21:41:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:36.715 21:41:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:36.715 21:41:37 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.715 21:41:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:36.715 21:41:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:36.715 21:41:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:14:36.715 21:41:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:36.715 21:41:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:36.715 21:41:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:36.715 21:41:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:36.715 21:41:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:36.715 21:41:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:36.715 21:41:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.715 21:41:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.715 [2024-12-10 21:41:37.313084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:36.715 [2024-12-10 21:41:37.313251] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:36.715 [2024-12-10 21:41:37.313271] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:36.715 request: 00:14:36.715 { 00:14:36.715 "base_bdev": "BaseBdev1", 00:14:36.715 "raid_bdev": "raid_bdev1", 00:14:36.715 "method": 
"bdev_raid_add_base_bdev", 00:14:36.715 "req_id": 1 00:14:36.715 } 00:14:36.715 Got JSON-RPC error response 00:14:36.715 response: 00:14:36.715 { 00:14:36.715 "code": -22, 00:14:36.715 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:36.715 } 00:14:36.715 21:41:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:36.715 21:41:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:14:36.715 21:41:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:36.715 21:41:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:36.715 21:41:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:36.715 21:41:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:37.655 21:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:37.655 21:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.655 21:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.655 21:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.655 21:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:37.655 21:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:37.655 21:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.655 21:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.655 21:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.655 21:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.655 21:41:38 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.655 21:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.655 21:41:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.655 21:41:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.655 21:41:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.655 21:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.655 "name": "raid_bdev1", 00:14:37.655 "uuid": "36a8878b-7a18-4587-a063-5700a2f340bf", 00:14:37.655 "strip_size_kb": 0, 00:14:37.655 "state": "online", 00:14:37.655 "raid_level": "raid1", 00:14:37.655 "superblock": true, 00:14:37.655 "num_base_bdevs": 2, 00:14:37.655 "num_base_bdevs_discovered": 1, 00:14:37.655 "num_base_bdevs_operational": 1, 00:14:37.655 "base_bdevs_list": [ 00:14:37.655 { 00:14:37.655 "name": null, 00:14:37.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.655 "is_configured": false, 00:14:37.655 "data_offset": 0, 00:14:37.655 "data_size": 63488 00:14:37.655 }, 00:14:37.655 { 00:14:37.655 "name": "BaseBdev2", 00:14:37.655 "uuid": "3607dc0d-eafb-5626-856b-2c18a23c739f", 00:14:37.655 "is_configured": true, 00:14:37.655 "data_offset": 2048, 00:14:37.655 "data_size": 63488 00:14:37.655 } 00:14:37.655 ] 00:14:37.655 }' 00:14:37.655 21:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.655 21:41:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.224 21:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:38.224 21:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.224 21:41:38 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:38.224 21:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:38.224 21:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.224 21:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.224 21:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.224 21:41:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.224 21:41:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.224 21:41:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.224 21:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.224 "name": "raid_bdev1", 00:14:38.224 "uuid": "36a8878b-7a18-4587-a063-5700a2f340bf", 00:14:38.224 "strip_size_kb": 0, 00:14:38.224 "state": "online", 00:14:38.224 "raid_level": "raid1", 00:14:38.224 "superblock": true, 00:14:38.224 "num_base_bdevs": 2, 00:14:38.224 "num_base_bdevs_discovered": 1, 00:14:38.224 "num_base_bdevs_operational": 1, 00:14:38.224 "base_bdevs_list": [ 00:14:38.224 { 00:14:38.224 "name": null, 00:14:38.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.224 "is_configured": false, 00:14:38.224 "data_offset": 0, 00:14:38.224 "data_size": 63488 00:14:38.224 }, 00:14:38.224 { 00:14:38.224 "name": "BaseBdev2", 00:14:38.224 "uuid": "3607dc0d-eafb-5626-856b-2c18a23c739f", 00:14:38.224 "is_configured": true, 00:14:38.224 "data_offset": 2048, 00:14:38.224 "data_size": 63488 00:14:38.224 } 00:14:38.224 ] 00:14:38.224 }' 00:14:38.224 21:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.224 21:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:14:38.224 21:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.224 21:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:38.224 21:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75888 00:14:38.224 21:41:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75888 ']' 00:14:38.224 21:41:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75888 00:14:38.224 21:41:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:38.224 21:41:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:38.224 21:41:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75888 00:14:38.224 21:41:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:38.224 21:41:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:38.224 killing process with pid 75888 00:14:38.224 21:41:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75888' 00:14:38.224 21:41:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75888 00:14:38.224 Received shutdown signal, test time was about 60.000000 seconds 00:14:38.224 00:14:38.224 Latency(us) 00:14:38.224 [2024-12-10T21:41:39.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.224 [2024-12-10T21:41:39.007Z] =================================================================================================================== 00:14:38.224 [2024-12-10T21:41:39.007Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:38.224 [2024-12-10 21:41:38.954773] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:38.224 21:41:38 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75888 00:14:38.224 [2024-12-10 21:41:38.954970] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:38.224 [2024-12-10 21:41:38.955030] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:38.224 [2024-12-10 21:41:38.955042] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:38.794 [2024-12-10 21:41:39.275228] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:39.731 21:41:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:39.731 00:14:39.731 real 0m23.920s 00:14:39.731 user 0m29.424s 00:14:39.731 sys 0m3.794s 00:14:39.731 21:41:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:39.731 21:41:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.731 ************************************ 00:14:39.731 END TEST raid_rebuild_test_sb 00:14:39.731 ************************************ 00:14:39.731 21:41:40 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:14:39.731 21:41:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:39.731 21:41:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:39.731 21:41:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:39.731 ************************************ 00:14:39.731 START TEST raid_rebuild_test_io 00:14:39.731 ************************************ 00:14:39.731 21:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:14:39.731 21:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:39.731 21:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:14:39.731 21:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:39.731 21:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:39.731 21:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:39.731 21:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:39.731 21:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:39.731 21:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:39.731 21:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:39.731 21:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:39.731 21:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:39.731 21:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:39.731 21:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:39.731 21:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:39.731 21:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:39.731 21:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:39.731 21:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:39.731 21:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:39.731 21:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:39.731 21:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:39.731 21:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:39.731 
21:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:39.731 21:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:39.731 21:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76628 00:14:39.731 21:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:39.731 21:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76628 00:14:39.731 21:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76628 ']' 00:14:39.731 21:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.731 21:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:39.731 21:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.731 21:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:39.731 21:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.990 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:39.991 Zero copy mechanism will not be used. 00:14:39.991 [2024-12-10 21:41:40.588899] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:14:39.991 [2024-12-10 21:41:40.589105] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76628 ] 00:14:39.991 [2024-12-10 21:41:40.766188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.250 [2024-12-10 21:41:40.883920] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.509 [2024-12-10 21:41:41.093308] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:40.509 [2024-12-10 21:41:41.093453] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:40.769 21:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:40.770 21:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:14:40.770 21:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:40.770 21:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:40.770 21:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.770 21:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.770 BaseBdev1_malloc 00:14:40.770 21:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.770 21:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:40.770 21:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.770 21:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.770 [2024-12-10 21:41:41.485323] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:14:40.770 [2024-12-10 21:41:41.485464] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.770 [2024-12-10 21:41:41.485513] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:40.770 [2024-12-10 21:41:41.485553] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.770 [2024-12-10 21:41:41.487760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.770 [2024-12-10 21:41:41.487865] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:40.770 BaseBdev1 00:14:40.770 21:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.770 21:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:40.770 21:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:40.770 21:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.770 21:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.770 BaseBdev2_malloc 00:14:40.770 21:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.770 21:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:40.770 21:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.770 21:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.770 [2024-12-10 21:41:41.539926] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:40.770 [2024-12-10 21:41:41.539992] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.770 [2024-12-10 21:41:41.540015] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:40.770 [2024-12-10 21:41:41.540029] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.770 [2024-12-10 21:41:41.542238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.770 [2024-12-10 21:41:41.542280] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:40.770 BaseBdev2 00:14:40.770 21:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.770 21:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:40.770 21:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.770 21:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.029 spare_malloc 00:14:41.029 21:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.029 21:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:41.029 21:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.029 21:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.029 spare_delay 00:14:41.029 21:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.029 21:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:41.029 21:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.029 21:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.029 [2024-12-10 21:41:41.617571] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:14:41.029 [2024-12-10 21:41:41.617629] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.029 [2024-12-10 21:41:41.617651] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:41.029 [2024-12-10 21:41:41.617661] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.029 [2024-12-10 21:41:41.619914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.029 [2024-12-10 21:41:41.620024] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:41.029 spare 00:14:41.029 21:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.029 21:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:41.029 21:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.029 21:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.029 [2024-12-10 21:41:41.629603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:41.029 [2024-12-10 21:41:41.631482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:41.029 [2024-12-10 21:41:41.631580] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:41.029 [2024-12-10 21:41:41.631596] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:41.029 [2024-12-10 21:41:41.631881] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:41.029 [2024-12-10 21:41:41.632069] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:41.030 [2024-12-10 21:41:41.632081] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:14:41.030 [2024-12-10 21:41:41.632262] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:41.030 21:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.030 21:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:41.030 21:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.030 21:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.030 21:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:41.030 21:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:41.030 21:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:41.030 21:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.030 21:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.030 21:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.030 21:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.030 21:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.030 21:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.030 21:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.030 21:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.030 21:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.030 21:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.030 
"name": "raid_bdev1", 00:14:41.030 "uuid": "9654d6fd-2469-42fa-a976-b68bf2e18c0a", 00:14:41.030 "strip_size_kb": 0, 00:14:41.030 "state": "online", 00:14:41.030 "raid_level": "raid1", 00:14:41.030 "superblock": false, 00:14:41.030 "num_base_bdevs": 2, 00:14:41.030 "num_base_bdevs_discovered": 2, 00:14:41.030 "num_base_bdevs_operational": 2, 00:14:41.030 "base_bdevs_list": [ 00:14:41.030 { 00:14:41.030 "name": "BaseBdev1", 00:14:41.030 "uuid": "91236b43-300b-5740-8c98-daf2b1e0b1ac", 00:14:41.030 "is_configured": true, 00:14:41.030 "data_offset": 0, 00:14:41.030 "data_size": 65536 00:14:41.030 }, 00:14:41.030 { 00:14:41.030 "name": "BaseBdev2", 00:14:41.030 "uuid": "9adf5e05-7c7a-537c-97dc-1444b3cbbf25", 00:14:41.030 "is_configured": true, 00:14:41.030 "data_offset": 0, 00:14:41.030 "data_size": 65536 00:14:41.030 } 00:14:41.030 ] 00:14:41.030 }' 00:14:41.030 21:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.030 21:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.599 21:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:41.599 21:41:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.599 21:41:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.599 21:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:41.599 [2024-12-10 21:41:42.089164] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:41.599 21:41:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.599 21:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:41.599 21:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.599 21:41:42 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:41.599 21:41:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.599 21:41:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.599 21:41:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.599 21:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:41.599 21:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:41.599 21:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:41.599 21:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:41.599 21:41:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.599 21:41:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.599 [2024-12-10 21:41:42.192683] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:41.599 21:41:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.599 21:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:41.599 21:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.599 21:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.599 21:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:41.599 21:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:41.599 21:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:41.599 21:41:42 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.599 21:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.599 21:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.599 21:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.599 21:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.599 21:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.599 21:41:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.599 21:41:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.599 21:41:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.599 21:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.599 "name": "raid_bdev1", 00:14:41.599 "uuid": "9654d6fd-2469-42fa-a976-b68bf2e18c0a", 00:14:41.599 "strip_size_kb": 0, 00:14:41.599 "state": "online", 00:14:41.599 "raid_level": "raid1", 00:14:41.599 "superblock": false, 00:14:41.599 "num_base_bdevs": 2, 00:14:41.599 "num_base_bdevs_discovered": 1, 00:14:41.599 "num_base_bdevs_operational": 1, 00:14:41.599 "base_bdevs_list": [ 00:14:41.599 { 00:14:41.599 "name": null, 00:14:41.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.599 "is_configured": false, 00:14:41.599 "data_offset": 0, 00:14:41.599 "data_size": 65536 00:14:41.599 }, 00:14:41.599 { 00:14:41.599 "name": "BaseBdev2", 00:14:41.599 "uuid": "9adf5e05-7c7a-537c-97dc-1444b3cbbf25", 00:14:41.599 "is_configured": true, 00:14:41.599 "data_offset": 0, 00:14:41.599 "data_size": 65536 00:14:41.599 } 00:14:41.599 ] 00:14:41.599 }' 00:14:41.599 21:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:41.599 21:41:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.599 [2024-12-10 21:41:42.292860] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:41.599 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:41.599 Zero copy mechanism will not be used. 00:14:41.599 Running I/O for 60 seconds... 00:14:42.169 21:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:42.169 21:41:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.169 21:41:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.169 [2024-12-10 21:41:42.672103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:42.169 21:41:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.169 21:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:42.169 [2024-12-10 21:41:42.747696] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:42.169 [2024-12-10 21:41:42.749847] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:42.169 [2024-12-10 21:41:42.870637] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:42.169 [2024-12-10 21:41:42.871343] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:42.427 [2024-12-10 21:41:43.081257] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:42.427 [2024-12-10 21:41:43.081713] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:42.688 168.00 IOPS, 504.00 MiB/s 
[2024-12-10T21:41:43.471Z] [2024-12-10 21:41:43.428376] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:42.953 [2024-12-10 21:41:43.638756] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:42.953 21:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:42.953 21:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.953 21:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:42.953 21:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:42.953 21:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.953 21:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.953 21:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.953 21:41:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.953 21:41:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.211 21:41:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.211 21:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.211 "name": "raid_bdev1", 00:14:43.211 "uuid": "9654d6fd-2469-42fa-a976-b68bf2e18c0a", 00:14:43.211 "strip_size_kb": 0, 00:14:43.211 "state": "online", 00:14:43.211 "raid_level": "raid1", 00:14:43.211 "superblock": false, 00:14:43.211 "num_base_bdevs": 2, 00:14:43.211 "num_base_bdevs_discovered": 2, 00:14:43.211 "num_base_bdevs_operational": 2, 00:14:43.211 "process": { 00:14:43.211 "type": "rebuild", 00:14:43.211 "target": "spare", 
00:14:43.211 "progress": { 00:14:43.211 "blocks": 10240, 00:14:43.211 "percent": 15 00:14:43.211 } 00:14:43.211 }, 00:14:43.211 "base_bdevs_list": [ 00:14:43.211 { 00:14:43.211 "name": "spare", 00:14:43.211 "uuid": "4187b752-0c46-53fe-bb9e-1e7dd1ee2c24", 00:14:43.211 "is_configured": true, 00:14:43.211 "data_offset": 0, 00:14:43.211 "data_size": 65536 00:14:43.211 }, 00:14:43.211 { 00:14:43.211 "name": "BaseBdev2", 00:14:43.211 "uuid": "9adf5e05-7c7a-537c-97dc-1444b3cbbf25", 00:14:43.211 "is_configured": true, 00:14:43.211 "data_offset": 0, 00:14:43.211 "data_size": 65536 00:14:43.211 } 00:14:43.211 ] 00:14:43.211 }' 00:14:43.211 21:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.211 21:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:43.211 21:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.211 21:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:43.211 21:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:43.211 21:41:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.211 21:41:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.211 [2024-12-10 21:41:43.852127] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:43.211 [2024-12-10 21:41:43.867100] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:43.211 [2024-12-10 21:41:43.867777] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:43.211 [2024-12-10 21:41:43.975241] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:14:43.211 [2024-12-10 21:41:43.990423] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:43.211 [2024-12-10 21:41:43.990505] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:43.211 [2024-12-10 21:41:43.990522] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:43.469 [2024-12-10 21:41:44.029860] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:14:43.469 21:41:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.469 21:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:43.469 21:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.469 21:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.469 21:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:43.469 21:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:43.469 21:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:43.469 21:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.469 21:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.469 21:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.469 21:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.469 21:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.469 21:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.469 21:41:44 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.469 21:41:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.469 21:41:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.469 21:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.469 "name": "raid_bdev1", 00:14:43.469 "uuid": "9654d6fd-2469-42fa-a976-b68bf2e18c0a", 00:14:43.469 "strip_size_kb": 0, 00:14:43.469 "state": "online", 00:14:43.469 "raid_level": "raid1", 00:14:43.469 "superblock": false, 00:14:43.469 "num_base_bdevs": 2, 00:14:43.469 "num_base_bdevs_discovered": 1, 00:14:43.469 "num_base_bdevs_operational": 1, 00:14:43.469 "base_bdevs_list": [ 00:14:43.469 { 00:14:43.469 "name": null, 00:14:43.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.469 "is_configured": false, 00:14:43.469 "data_offset": 0, 00:14:43.469 "data_size": 65536 00:14:43.469 }, 00:14:43.469 { 00:14:43.469 "name": "BaseBdev2", 00:14:43.469 "uuid": "9adf5e05-7c7a-537c-97dc-1444b3cbbf25", 00:14:43.469 "is_configured": true, 00:14:43.469 "data_offset": 0, 00:14:43.469 "data_size": 65536 00:14:43.469 } 00:14:43.469 ] 00:14:43.469 }' 00:14:43.469 21:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.469 21:41:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.727 164.50 IOPS, 493.50 MiB/s [2024-12-10T21:41:44.510Z] 21:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:43.727 21:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.727 21:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:43.727 21:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:43.727 21:41:44 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.727 21:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.727 21:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.727 21:41:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.727 21:41:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.984 21:41:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.984 21:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.984 "name": "raid_bdev1", 00:14:43.984 "uuid": "9654d6fd-2469-42fa-a976-b68bf2e18c0a", 00:14:43.984 "strip_size_kb": 0, 00:14:43.984 "state": "online", 00:14:43.984 "raid_level": "raid1", 00:14:43.984 "superblock": false, 00:14:43.984 "num_base_bdevs": 2, 00:14:43.984 "num_base_bdevs_discovered": 1, 00:14:43.984 "num_base_bdevs_operational": 1, 00:14:43.984 "base_bdevs_list": [ 00:14:43.984 { 00:14:43.984 "name": null, 00:14:43.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.984 "is_configured": false, 00:14:43.984 "data_offset": 0, 00:14:43.984 "data_size": 65536 00:14:43.984 }, 00:14:43.984 { 00:14:43.984 "name": "BaseBdev2", 00:14:43.984 "uuid": "9adf5e05-7c7a-537c-97dc-1444b3cbbf25", 00:14:43.984 "is_configured": true, 00:14:43.984 "data_offset": 0, 00:14:43.984 "data_size": 65536 00:14:43.984 } 00:14:43.984 ] 00:14:43.984 }' 00:14:43.984 21:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.984 21:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:43.984 21:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.984 21:41:44 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:43.984 21:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:43.984 21:41:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.984 21:41:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.984 [2024-12-10 21:41:44.645116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:43.984 21:41:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.984 21:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:43.984 [2024-12-10 21:41:44.693227] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:43.984 [2024-12-10 21:41:44.695274] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:44.241 [2024-12-10 21:41:44.809859] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:44.241 [2024-12-10 21:41:44.810505] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:44.497 [2024-12-10 21:41:45.025043] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:44.497 [2024-12-10 21:41:45.025378] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:44.755 178.33 IOPS, 535.00 MiB/s [2024-12-10T21:41:45.538Z] [2024-12-10 21:41:45.362845] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:44.755 [2024-12-10 21:41:45.484864] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:45.012 21:41:45 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:45.012 21:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.012 21:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:45.012 21:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:45.012 21:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.012 21:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.012 21:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.012 21:41:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.012 21:41:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.012 21:41:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.012 21:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.012 "name": "raid_bdev1", 00:14:45.012 "uuid": "9654d6fd-2469-42fa-a976-b68bf2e18c0a", 00:14:45.012 "strip_size_kb": 0, 00:14:45.012 "state": "online", 00:14:45.012 "raid_level": "raid1", 00:14:45.012 "superblock": false, 00:14:45.012 "num_base_bdevs": 2, 00:14:45.012 "num_base_bdevs_discovered": 2, 00:14:45.012 "num_base_bdevs_operational": 2, 00:14:45.012 "process": { 00:14:45.012 "type": "rebuild", 00:14:45.012 "target": "spare", 00:14:45.012 "progress": { 00:14:45.012 "blocks": 12288, 00:14:45.012 "percent": 18 00:14:45.012 } 00:14:45.012 }, 00:14:45.012 "base_bdevs_list": [ 00:14:45.012 { 00:14:45.012 "name": "spare", 00:14:45.012 "uuid": "4187b752-0c46-53fe-bb9e-1e7dd1ee2c24", 00:14:45.012 "is_configured": true, 00:14:45.012 "data_offset": 0, 00:14:45.012 "data_size": 65536 
00:14:45.012 }, 00:14:45.012 { 00:14:45.012 "name": "BaseBdev2", 00:14:45.012 "uuid": "9adf5e05-7c7a-537c-97dc-1444b3cbbf25", 00:14:45.012 "is_configured": true, 00:14:45.012 "data_offset": 0, 00:14:45.012 "data_size": 65536 00:14:45.012 } 00:14:45.012 ] 00:14:45.012 }' 00:14:45.013 21:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.013 21:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:45.270 21:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.270 21:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:45.270 21:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:45.270 21:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:45.270 21:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:45.270 21:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:45.270 21:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=417 00:14:45.270 21:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:45.270 21:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:45.270 21:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.270 21:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:45.270 21:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:45.270 21:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.270 21:41:45 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.270 21:41:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.270 21:41:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.270 [2024-12-10 21:41:45.830750] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:45.270 21:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.270 [2024-12-10 21:41:45.837525] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:45.270 21:41:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.270 21:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.270 "name": "raid_bdev1", 00:14:45.270 "uuid": "9654d6fd-2469-42fa-a976-b68bf2e18c0a", 00:14:45.270 "strip_size_kb": 0, 00:14:45.270 "state": "online", 00:14:45.270 "raid_level": "raid1", 00:14:45.270 "superblock": false, 00:14:45.270 "num_base_bdevs": 2, 00:14:45.270 "num_base_bdevs_discovered": 2, 00:14:45.270 "num_base_bdevs_operational": 2, 00:14:45.270 "process": { 00:14:45.270 "type": "rebuild", 00:14:45.270 "target": "spare", 00:14:45.270 "progress": { 00:14:45.270 "blocks": 16384, 00:14:45.270 "percent": 25 00:14:45.270 } 00:14:45.270 }, 00:14:45.270 "base_bdevs_list": [ 00:14:45.270 { 00:14:45.270 "name": "spare", 00:14:45.270 "uuid": "4187b752-0c46-53fe-bb9e-1e7dd1ee2c24", 00:14:45.270 "is_configured": true, 00:14:45.270 "data_offset": 0, 00:14:45.270 "data_size": 65536 00:14:45.270 }, 00:14:45.270 { 00:14:45.270 "name": "BaseBdev2", 00:14:45.270 "uuid": "9adf5e05-7c7a-537c-97dc-1444b3cbbf25", 00:14:45.270 "is_configured": true, 00:14:45.270 "data_offset": 0, 00:14:45.270 "data_size": 65536 00:14:45.270 } 00:14:45.270 ] 00:14:45.270 }' 
00:14:45.270 21:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.270 21:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:45.270 21:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.270 21:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:45.270 21:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:45.529 [2024-12-10 21:41:46.170082] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:45.529 [2024-12-10 21:41:46.170705] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:45.787 165.25 IOPS, 495.75 MiB/s [2024-12-10T21:41:46.570Z] [2024-12-10 21:41:46.415277] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:46.045 [2024-12-10 21:41:46.747125] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:46.045 [2024-12-10 21:41:46.747658] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:46.304 [2024-12-10 21:41:46.974056] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:46.304 21:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:46.304 21:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:46.304 21:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.304 21:41:46 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:46.304 21:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:46.304 21:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.304 21:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.304 21:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.304 21:41:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.304 21:41:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.304 21:41:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.304 21:41:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.304 "name": "raid_bdev1", 00:14:46.304 "uuid": "9654d6fd-2469-42fa-a976-b68bf2e18c0a", 00:14:46.304 "strip_size_kb": 0, 00:14:46.304 "state": "online", 00:14:46.304 "raid_level": "raid1", 00:14:46.304 "superblock": false, 00:14:46.304 "num_base_bdevs": 2, 00:14:46.304 "num_base_bdevs_discovered": 2, 00:14:46.304 "num_base_bdevs_operational": 2, 00:14:46.304 "process": { 00:14:46.304 "type": "rebuild", 00:14:46.304 "target": "spare", 00:14:46.304 "progress": { 00:14:46.304 "blocks": 28672, 00:14:46.304 "percent": 43 00:14:46.304 } 00:14:46.304 }, 00:14:46.304 "base_bdevs_list": [ 00:14:46.304 { 00:14:46.304 "name": "spare", 00:14:46.304 "uuid": "4187b752-0c46-53fe-bb9e-1e7dd1ee2c24", 00:14:46.304 "is_configured": true, 00:14:46.304 "data_offset": 0, 00:14:46.304 "data_size": 65536 00:14:46.304 }, 00:14:46.304 { 00:14:46.304 "name": "BaseBdev2", 00:14:46.304 "uuid": "9adf5e05-7c7a-537c-97dc-1444b3cbbf25", 00:14:46.304 "is_configured": true, 00:14:46.304 "data_offset": 0, 00:14:46.304 "data_size": 65536 00:14:46.304 } 00:14:46.304 ] 00:14:46.304 }' 00:14:46.304 
21:41:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.304 21:41:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:46.304 21:41:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.562 21:41:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:46.562 21:41:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:46.562 [2024-12-10 21:41:47.313126] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:46.562 [2024-12-10 21:41:47.313852] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:47.128 140.40 IOPS, 421.20 MiB/s [2024-12-10T21:41:47.911Z] [2024-12-10 21:41:47.660601] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:47.128 [2024-12-10 21:41:47.877207] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:47.387 21:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:47.387 21:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:47.387 21:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.387 21:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:47.387 21:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:47.387 21:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.387 21:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:14:47.387 21:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.387 21:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.387 21:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.387 21:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.646 21:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.646 "name": "raid_bdev1", 00:14:47.646 "uuid": "9654d6fd-2469-42fa-a976-b68bf2e18c0a", 00:14:47.646 "strip_size_kb": 0, 00:14:47.646 "state": "online", 00:14:47.646 "raid_level": "raid1", 00:14:47.646 "superblock": false, 00:14:47.646 "num_base_bdevs": 2, 00:14:47.646 "num_base_bdevs_discovered": 2, 00:14:47.646 "num_base_bdevs_operational": 2, 00:14:47.646 "process": { 00:14:47.646 "type": "rebuild", 00:14:47.646 "target": "spare", 00:14:47.646 "progress": { 00:14:47.646 "blocks": 43008, 00:14:47.646 "percent": 65 00:14:47.646 } 00:14:47.646 }, 00:14:47.646 "base_bdevs_list": [ 00:14:47.646 { 00:14:47.646 "name": "spare", 00:14:47.646 "uuid": "4187b752-0c46-53fe-bb9e-1e7dd1ee2c24", 00:14:47.646 "is_configured": true, 00:14:47.646 "data_offset": 0, 00:14:47.646 "data_size": 65536 00:14:47.646 }, 00:14:47.646 { 00:14:47.646 "name": "BaseBdev2", 00:14:47.646 "uuid": "9adf5e05-7c7a-537c-97dc-1444b3cbbf25", 00:14:47.646 "is_configured": true, 00:14:47.646 "data_offset": 0, 00:14:47.646 "data_size": 65536 00:14:47.646 } 00:14:47.646 ] 00:14:47.646 }' 00:14:47.646 21:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.646 [2024-12-10 21:41:48.201405] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:47.646 21:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # 
[[ rebuild == \r\e\b\u\i\l\d ]] 00:14:47.646 21:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.646 21:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:47.646 21:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:47.646 124.67 IOPS, 374.00 MiB/s [2024-12-10T21:41:48.429Z] [2024-12-10 21:41:48.409355] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:47.646 [2024-12-10 21:41:48.409734] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:48.213 [2024-12-10 21:41:48.740220] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:14:48.213 [2024-12-10 21:41:48.955473] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:48.780 21:41:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:48.780 21:41:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:48.780 21:41:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.780 21:41:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:48.780 21:41:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:48.780 21:41:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.780 21:41:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.780 21:41:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.780 21:41:49 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.780 21:41:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.780 111.14 IOPS, 333.43 MiB/s [2024-12-10T21:41:49.563Z] 21:41:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.780 21:41:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.780 "name": "raid_bdev1", 00:14:48.780 "uuid": "9654d6fd-2469-42fa-a976-b68bf2e18c0a", 00:14:48.780 "strip_size_kb": 0, 00:14:48.780 "state": "online", 00:14:48.780 "raid_level": "raid1", 00:14:48.780 "superblock": false, 00:14:48.780 "num_base_bdevs": 2, 00:14:48.780 "num_base_bdevs_discovered": 2, 00:14:48.780 "num_base_bdevs_operational": 2, 00:14:48.780 "process": { 00:14:48.780 "type": "rebuild", 00:14:48.780 "target": "spare", 00:14:48.780 "progress": { 00:14:48.780 "blocks": 57344, 00:14:48.780 "percent": 87 00:14:48.780 } 00:14:48.780 }, 00:14:48.780 "base_bdevs_list": [ 00:14:48.780 { 00:14:48.780 "name": "spare", 00:14:48.780 "uuid": "4187b752-0c46-53fe-bb9e-1e7dd1ee2c24", 00:14:48.780 "is_configured": true, 00:14:48.780 "data_offset": 0, 00:14:48.780 "data_size": 65536 00:14:48.780 }, 00:14:48.780 { 00:14:48.780 "name": "BaseBdev2", 00:14:48.780 "uuid": "9adf5e05-7c7a-537c-97dc-1444b3cbbf25", 00:14:48.780 "is_configured": true, 00:14:48.780 "data_offset": 0, 00:14:48.780 "data_size": 65536 00:14:48.780 } 00:14:48.780 ] 00:14:48.780 }' 00:14:48.780 21:41:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.780 [2024-12-10 21:41:49.394746] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:48.780 21:41:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:48.780 21:41:49 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.780 21:41:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:48.780 21:41:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:49.348 [2024-12-10 21:41:49.825380] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:49.348 [2024-12-10 21:41:49.930842] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:49.348 [2024-12-10 21:41:49.933670] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.866 102.12 IOPS, 306.38 MiB/s [2024-12-10T21:41:50.649Z] 21:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:49.866 21:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:49.866 21:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.866 21:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:49.866 21:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:49.866 21:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.866 21:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.866 21:41:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.866 21:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.866 21:41:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.866 21:41:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.866 21:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:14:49.866 "name": "raid_bdev1", 00:14:49.866 "uuid": "9654d6fd-2469-42fa-a976-b68bf2e18c0a", 00:14:49.866 "strip_size_kb": 0, 00:14:49.866 "state": "online", 00:14:49.866 "raid_level": "raid1", 00:14:49.866 "superblock": false, 00:14:49.866 "num_base_bdevs": 2, 00:14:49.866 "num_base_bdevs_discovered": 2, 00:14:49.866 "num_base_bdevs_operational": 2, 00:14:49.866 "base_bdevs_list": [ 00:14:49.866 { 00:14:49.866 "name": "spare", 00:14:49.866 "uuid": "4187b752-0c46-53fe-bb9e-1e7dd1ee2c24", 00:14:49.866 "is_configured": true, 00:14:49.866 "data_offset": 0, 00:14:49.866 "data_size": 65536 00:14:49.866 }, 00:14:49.866 { 00:14:49.866 "name": "BaseBdev2", 00:14:49.866 "uuid": "9adf5e05-7c7a-537c-97dc-1444b3cbbf25", 00:14:49.866 "is_configured": true, 00:14:49.866 "data_offset": 0, 00:14:49.866 "data_size": 65536 00:14:49.866 } 00:14:49.866 ] 00:14:49.866 }' 00:14:49.866 21:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.866 21:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:49.866 21:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.866 21:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:49.866 21:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:49.866 21:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:49.866 21:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.866 21:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:49.866 21:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:49.866 21:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.866 
21:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.866 21:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.866 21:41:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.866 21:41:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.866 21:41:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.125 21:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.125 "name": "raid_bdev1", 00:14:50.125 "uuid": "9654d6fd-2469-42fa-a976-b68bf2e18c0a", 00:14:50.125 "strip_size_kb": 0, 00:14:50.125 "state": "online", 00:14:50.125 "raid_level": "raid1", 00:14:50.125 "superblock": false, 00:14:50.125 "num_base_bdevs": 2, 00:14:50.125 "num_base_bdevs_discovered": 2, 00:14:50.125 "num_base_bdevs_operational": 2, 00:14:50.125 "base_bdevs_list": [ 00:14:50.125 { 00:14:50.125 "name": "spare", 00:14:50.125 "uuid": "4187b752-0c46-53fe-bb9e-1e7dd1ee2c24", 00:14:50.125 "is_configured": true, 00:14:50.125 "data_offset": 0, 00:14:50.125 "data_size": 65536 00:14:50.125 }, 00:14:50.125 { 00:14:50.125 "name": "BaseBdev2", 00:14:50.125 "uuid": "9adf5e05-7c7a-537c-97dc-1444b3cbbf25", 00:14:50.125 "is_configured": true, 00:14:50.125 "data_offset": 0, 00:14:50.125 "data_size": 65536 00:14:50.125 } 00:14:50.125 ] 00:14:50.125 }' 00:14:50.125 21:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.125 21:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:50.125 21:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.125 21:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:50.125 21:41:50 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:50.125 21:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:50.125 21:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:50.125 21:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:50.125 21:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:50.125 21:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:50.125 21:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.125 21:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.125 21:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.125 21:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.125 21:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.125 21:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.125 21:41:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.125 21:41:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.125 21:41:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.125 21:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.125 "name": "raid_bdev1", 00:14:50.125 "uuid": "9654d6fd-2469-42fa-a976-b68bf2e18c0a", 00:14:50.125 "strip_size_kb": 0, 00:14:50.125 "state": "online", 00:14:50.125 "raid_level": "raid1", 00:14:50.125 "superblock": false, 00:14:50.125 "num_base_bdevs": 2, 00:14:50.125 
"num_base_bdevs_discovered": 2, 00:14:50.125 "num_base_bdevs_operational": 2, 00:14:50.125 "base_bdevs_list": [ 00:14:50.125 { 00:14:50.125 "name": "spare", 00:14:50.125 "uuid": "4187b752-0c46-53fe-bb9e-1e7dd1ee2c24", 00:14:50.125 "is_configured": true, 00:14:50.125 "data_offset": 0, 00:14:50.125 "data_size": 65536 00:14:50.125 }, 00:14:50.125 { 00:14:50.125 "name": "BaseBdev2", 00:14:50.125 "uuid": "9adf5e05-7c7a-537c-97dc-1444b3cbbf25", 00:14:50.125 "is_configured": true, 00:14:50.125 "data_offset": 0, 00:14:50.125 "data_size": 65536 00:14:50.125 } 00:14:50.125 ] 00:14:50.125 }' 00:14:50.125 21:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.125 21:41:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.694 21:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:50.694 21:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.694 21:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.694 [2024-12-10 21:41:51.218661] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:50.694 [2024-12-10 21:41:51.218693] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:50.694 00:14:50.694 Latency(us) 00:14:50.694 [2024-12-10T21:41:51.477Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.694 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:50.694 raid_bdev1 : 8.96 94.79 284.37 0.00 0.00 13456.40 316.59 108520.75 00:14:50.694 [2024-12-10T21:41:51.477Z] =================================================================================================================== 00:14:50.694 [2024-12-10T21:41:51.477Z] Total : 94.79 284.37 0.00 0.00 13456.40 316.59 108520.75 00:14:50.694 [2024-12-10 21:41:51.258242] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:50.694 [2024-12-10 21:41:51.258318] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:50.694 [2024-12-10 21:41:51.258400] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:50.694 [2024-12-10 21:41:51.258411] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:50.694 { 00:14:50.694 "results": [ 00:14:50.694 { 00:14:50.694 "job": "raid_bdev1", 00:14:50.694 "core_mask": "0x1", 00:14:50.694 "workload": "randrw", 00:14:50.694 "percentage": 50, 00:14:50.694 "status": "finished", 00:14:50.694 "queue_depth": 2, 00:14:50.694 "io_size": 3145728, 00:14:50.694 "runtime": 8.956742, 00:14:50.694 "iops": 94.78893106444285, 00:14:50.694 "mibps": 284.36679319332853, 00:14:50.694 "io_failed": 0, 00:14:50.694 "io_timeout": 0, 00:14:50.694 "avg_latency_us": 13456.401059556323, 00:14:50.694 "min_latency_us": 316.5903930131004, 00:14:50.694 "max_latency_us": 108520.74759825328 00:14:50.694 } 00:14:50.694 ], 00:14:50.694 "core_count": 1 00:14:50.694 } 00:14:50.694 21:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.694 21:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.694 21:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:50.694 21:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.694 21:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.694 21:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.694 21:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:50.694 21:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 
00:14:50.694 21:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:50.694 21:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:50.694 21:41:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:50.694 21:41:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:50.694 21:41:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:50.694 21:41:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:50.694 21:41:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:50.694 21:41:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:50.694 21:41:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:50.694 21:41:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:50.694 21:41:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:50.954 /dev/nbd0 00:14:50.954 21:41:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:50.954 21:41:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:50.954 21:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:50.954 21:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:50.954 21:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:50.954 21:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:50.954 21:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 
00:14:50.954 21:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:50.954 21:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:50.954 21:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:50.955 21:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:50.955 1+0 records in 00:14:50.955 1+0 records out 00:14:50.955 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252338 s, 16.2 MB/s 00:14:50.955 21:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:50.955 21:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:50.955 21:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:50.955 21:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:50.955 21:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:50.955 21:41:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:50.955 21:41:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:50.955 21:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:50.955 21:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:14:50.955 21:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:14:50.955 21:41:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:50.955 21:41:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 
00:14:50.955 21:41:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:50.955 21:41:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:50.955 21:41:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:50.955 21:41:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:50.955 21:41:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:50.955 21:41:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:50.955 21:41:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:14:51.215 /dev/nbd1 00:14:51.215 21:41:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:51.215 21:41:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:51.215 21:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:51.215 21:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:51.215 21:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:51.215 21:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:51.215 21:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:51.215 21:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:51.215 21:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:51.215 21:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:51.215 21:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:14:51.215 1+0 records in 00:14:51.215 1+0 records out 00:14:51.215 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422671 s, 9.7 MB/s 00:14:51.215 21:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:51.215 21:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:51.215 21:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:51.215 21:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:51.215 21:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:51.215 21:41:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:51.215 21:41:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:51.215 21:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:51.474 21:41:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:51.474 21:41:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:51.474 21:41:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:51.474 21:41:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:51.474 21:41:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:51.474 21:41:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:51.474 21:41:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:51.474 21:41:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 
00:14:51.474 21:41:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:51.474 21:41:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:51.474 21:41:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:51.474 21:41:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:51.474 21:41:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:51.474 21:41:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:51.474 21:41:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:51.474 21:41:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:51.474 21:41:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:51.474 21:41:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:51.474 21:41:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:51.474 21:41:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:51.474 21:41:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:51.474 21:41:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:51.733 21:41:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:51.733 21:41:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:51.733 21:41:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:51.733 21:41:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:51.733 21:41:52 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:51.733 21:41:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:51.733 21:41:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:51.733 21:41:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:51.733 21:41:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:51.733 21:41:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76628 00:14:51.733 21:41:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76628 ']' 00:14:51.733 21:41:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76628 00:14:51.733 21:41:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:14:51.733 21:41:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:51.733 21:41:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76628 00:14:51.992 21:41:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:51.992 21:41:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:51.992 killing process with pid 76628 00:14:51.992 Received shutdown signal, test time was about 10.261107 seconds 00:14:51.992 00:14:51.992 Latency(us) 00:14:51.992 [2024-12-10T21:41:52.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.992 [2024-12-10T21:41:52.775Z] =================================================================================================================== 00:14:51.992 [2024-12-10T21:41:52.775Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:51.992 21:41:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76628' 00:14:51.992 21:41:52 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76628 00:14:51.992 [2024-12-10 21:41:52.536407] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:51.992 21:41:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76628 00:14:52.251 [2024-12-10 21:41:52.778107] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:53.278 21:41:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:53.278 00:14:53.278 real 0m13.512s 00:14:53.278 user 0m16.939s 00:14:53.278 sys 0m1.504s 00:14:53.278 21:41:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:53.278 21:41:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.278 ************************************ 00:14:53.278 END TEST raid_rebuild_test_io 00:14:53.278 ************************************ 00:14:53.278 21:41:54 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:14:53.278 21:41:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:53.278 21:41:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:53.278 21:41:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:53.537 ************************************ 00:14:53.537 START TEST raid_rebuild_test_sb_io 00:14:53.537 ************************************ 00:14:53.537 21:41:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:14:53.537 21:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:53.537 21:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:53.537 21:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:53.537 21:41:54 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:53.537 21:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:53.537 21:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:53.537 21:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:53.537 21:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:53.537 21:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:53.537 21:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:53.537 21:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:53.537 21:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:53.537 21:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:53.537 21:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:53.537 21:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:53.537 21:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:53.537 21:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:53.537 21:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:53.537 21:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:53.537 21:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:53.538 21:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:53.538 21:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:53.538 21:41:54 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:53.538 21:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:53.538 21:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77025 00:14:53.538 21:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:53.538 21:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77025 00:14:53.538 21:41:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77025 ']' 00:14:53.538 21:41:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.538 21:41:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:53.538 21:41:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.538 21:41:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:53.538 21:41:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.538 [2024-12-10 21:41:54.165352] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:14:53.538 [2024-12-10 21:41:54.165576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77025 ] 00:14:53.538 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:14:53.538 Zero copy mechanism will not be used. 00:14:53.797 [2024-12-10 21:41:54.340220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.797 [2024-12-10 21:41:54.459238] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.056 [2024-12-10 21:41:54.668827] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:54.056 [2024-12-10 21:41:54.668982] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:54.315 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:54.315 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:14:54.315 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:54.315 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:54.315 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.315 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.315 BaseBdev1_malloc 00:14:54.315 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.315 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:54.315 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.315 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.315 [2024-12-10 21:41:55.072982] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:54.315 [2024-12-10 21:41:55.073123] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.315 [2024-12-10 21:41:55.073162] vbdev_passthru.c: 682:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007280 00:14:54.315 [2024-12-10 21:41:55.073196] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.315 [2024-12-10 21:41:55.075468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.315 [2024-12-10 21:41:55.075548] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:54.315 BaseBdev1 00:14:54.315 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.315 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:54.315 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:54.315 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.315 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.574 BaseBdev2_malloc 00:14:54.574 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.574 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:54.574 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.574 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.574 [2024-12-10 21:41:55.127038] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:54.574 [2024-12-10 21:41:55.127098] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.574 [2024-12-10 21:41:55.127116] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:54.574 [2024-12-10 21:41:55.127127] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:14:54.574 [2024-12-10 21:41:55.129271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.574 [2024-12-10 21:41:55.129375] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:54.574 BaseBdev2 00:14:54.574 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.574 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:54.574 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.574 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.574 spare_malloc 00:14:54.574 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.574 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:54.574 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.574 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.574 spare_delay 00:14:54.574 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.574 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:54.574 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.574 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.574 [2024-12-10 21:41:55.201507] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:54.574 [2024-12-10 21:41:55.201565] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.574 [2024-12-10 21:41:55.201601] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:54.574 [2024-12-10 21:41:55.201612] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.574 [2024-12-10 21:41:55.203669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.574 [2024-12-10 21:41:55.203770] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:54.574 spare 00:14:54.574 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.574 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:54.574 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.574 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.574 [2024-12-10 21:41:55.213541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:54.574 [2024-12-10 21:41:55.215211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:54.574 [2024-12-10 21:41:55.215361] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:54.574 [2024-12-10 21:41:55.215376] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:54.574 [2024-12-10 21:41:55.215707] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:54.574 [2024-12-10 21:41:55.215981] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:54.574 [2024-12-10 21:41:55.216036] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:54.574 [2024-12-10 21:41:55.216269] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:54.574 
21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.574 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:54.574 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:54.574 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:54.574 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:54.574 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:54.574 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:54.574 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.574 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.574 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.574 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.574 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.574 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.574 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.574 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.574 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.575 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.575 "name": "raid_bdev1", 00:14:54.575 "uuid": "bda4e49e-a071-45e9-b41d-7ccac321e1ef", 
00:14:54.575 "strip_size_kb": 0, 00:14:54.575 "state": "online", 00:14:54.575 "raid_level": "raid1", 00:14:54.575 "superblock": true, 00:14:54.575 "num_base_bdevs": 2, 00:14:54.575 "num_base_bdevs_discovered": 2, 00:14:54.575 "num_base_bdevs_operational": 2, 00:14:54.575 "base_bdevs_list": [ 00:14:54.575 { 00:14:54.575 "name": "BaseBdev1", 00:14:54.575 "uuid": "b8827a50-9ec8-56fc-a996-151ec956abd8", 00:14:54.575 "is_configured": true, 00:14:54.575 "data_offset": 2048, 00:14:54.575 "data_size": 63488 00:14:54.575 }, 00:14:54.575 { 00:14:54.575 "name": "BaseBdev2", 00:14:54.575 "uuid": "ebbca6dc-0c3d-574b-80ea-d86d63c8001b", 00:14:54.575 "is_configured": true, 00:14:54.575 "data_offset": 2048, 00:14:54.575 "data_size": 63488 00:14:54.575 } 00:14:54.575 ] 00:14:54.575 }' 00:14:54.575 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.575 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.143 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:55.143 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:55.143 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.143 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.143 [2024-12-10 21:41:55.649180] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:55.143 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.143 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:55.143 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.143 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.143 
21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.143 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:55.143 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.143 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:55.143 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:55.143 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:55.143 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:55.143 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.144 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.144 [2024-12-10 21:41:55.748605] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:55.144 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.144 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:55.144 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:55.144 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:55.144 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:55.144 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:55.144 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:55.144 21:41:55 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.144 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.144 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.144 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.144 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.144 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.144 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.144 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.144 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.144 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.144 "name": "raid_bdev1", 00:14:55.144 "uuid": "bda4e49e-a071-45e9-b41d-7ccac321e1ef", 00:14:55.144 "strip_size_kb": 0, 00:14:55.144 "state": "online", 00:14:55.144 "raid_level": "raid1", 00:14:55.144 "superblock": true, 00:14:55.144 "num_base_bdevs": 2, 00:14:55.144 "num_base_bdevs_discovered": 1, 00:14:55.144 "num_base_bdevs_operational": 1, 00:14:55.144 "base_bdevs_list": [ 00:14:55.144 { 00:14:55.144 "name": null, 00:14:55.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.144 "is_configured": false, 00:14:55.144 "data_offset": 0, 00:14:55.144 "data_size": 63488 00:14:55.144 }, 00:14:55.144 { 00:14:55.144 "name": "BaseBdev2", 00:14:55.144 "uuid": "ebbca6dc-0c3d-574b-80ea-d86d63c8001b", 00:14:55.144 "is_configured": true, 00:14:55.144 "data_offset": 2048, 00:14:55.144 "data_size": 63488 00:14:55.144 } 00:14:55.144 ] 00:14:55.144 }' 00:14:55.144 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.144 21:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.144 [2024-12-10 21:41:55.851913] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:55.144 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:55.144 Zero copy mechanism will not be used. 00:14:55.144 Running I/O for 60 seconds... 00:14:55.403 21:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:55.403 21:41:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.403 21:41:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.403 [2024-12-10 21:41:56.175953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:55.662 21:41:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.662 21:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:55.662 [2024-12-10 21:41:56.234778] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:55.662 [2024-12-10 21:41:56.236784] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:55.662 [2024-12-10 21:41:56.355554] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:55.662 [2024-12-10 21:41:56.356245] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:55.921 [2024-12-10 21:41:56.589450] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:55.921 [2024-12-10 21:41:56.589778] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:14:56.181 157.00 IOPS, 471.00 MiB/s [2024-12-10T21:41:56.964Z] [2024-12-10 21:41:56.922630] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:56.439 [2024-12-10 21:41:57.150029] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:56.439 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:56.439 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.439 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:56.698 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:56.698 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.698 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.698 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.698 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.698 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.698 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.698 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.698 "name": "raid_bdev1", 00:14:56.698 "uuid": "bda4e49e-a071-45e9-b41d-7ccac321e1ef", 00:14:56.698 "strip_size_kb": 0, 00:14:56.698 "state": "online", 00:14:56.698 "raid_level": "raid1", 00:14:56.698 "superblock": true, 00:14:56.698 "num_base_bdevs": 2, 00:14:56.698 "num_base_bdevs_discovered": 2, 00:14:56.698 "num_base_bdevs_operational": 2, 00:14:56.698 
"process": { 00:14:56.698 "type": "rebuild", 00:14:56.698 "target": "spare", 00:14:56.698 "progress": { 00:14:56.698 "blocks": 10240, 00:14:56.698 "percent": 16 00:14:56.698 } 00:14:56.698 }, 00:14:56.698 "base_bdevs_list": [ 00:14:56.698 { 00:14:56.698 "name": "spare", 00:14:56.698 "uuid": "31a3f482-c2de-5468-896e-8bd1adb68244", 00:14:56.698 "is_configured": true, 00:14:56.698 "data_offset": 2048, 00:14:56.698 "data_size": 63488 00:14:56.698 }, 00:14:56.698 { 00:14:56.698 "name": "BaseBdev2", 00:14:56.698 "uuid": "ebbca6dc-0c3d-574b-80ea-d86d63c8001b", 00:14:56.698 "is_configured": true, 00:14:56.698 "data_offset": 2048, 00:14:56.698 "data_size": 63488 00:14:56.698 } 00:14:56.698 ] 00:14:56.698 }' 00:14:56.698 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.699 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:56.699 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.699 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:56.699 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:56.699 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.699 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.699 [2024-12-10 21:41:57.382395] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:56.699 [2024-12-10 21:41:57.389711] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:56.699 [2024-12-10 21:41:57.391189] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:56.699 [2024-12-10 21:41:57.399493] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.699 [2024-12-10 21:41:57.399531] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:56.699 [2024-12-10 21:41:57.399542] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:56.699 [2024-12-10 21:41:57.455810] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:14:56.699 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.699 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:56.699 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.699 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.699 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:56.699 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:56.699 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:56.699 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.699 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.699 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.699 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.699 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.699 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.699 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.699 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.958 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.958 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.958 "name": "raid_bdev1", 00:14:56.958 "uuid": "bda4e49e-a071-45e9-b41d-7ccac321e1ef", 00:14:56.958 "strip_size_kb": 0, 00:14:56.958 "state": "online", 00:14:56.958 "raid_level": "raid1", 00:14:56.958 "superblock": true, 00:14:56.958 "num_base_bdevs": 2, 00:14:56.958 "num_base_bdevs_discovered": 1, 00:14:56.958 "num_base_bdevs_operational": 1, 00:14:56.958 "base_bdevs_list": [ 00:14:56.958 { 00:14:56.958 "name": null, 00:14:56.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.958 "is_configured": false, 00:14:56.958 "data_offset": 0, 00:14:56.958 "data_size": 63488 00:14:56.958 }, 00:14:56.958 { 00:14:56.958 "name": "BaseBdev2", 00:14:56.958 "uuid": "ebbca6dc-0c3d-574b-80ea-d86d63c8001b", 00:14:56.958 "is_configured": true, 00:14:56.958 "data_offset": 2048, 00:14:56.958 "data_size": 63488 00:14:56.958 } 00:14:56.958 ] 00:14:56.958 }' 00:14:56.958 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.958 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.218 160.00 IOPS, 480.00 MiB/s [2024-12-10T21:41:58.001Z] 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:57.218 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:57.218 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:57.218 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:57.218 21:41:57 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:57.218 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.218 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.218 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.218 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.218 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.218 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:57.218 "name": "raid_bdev1", 00:14:57.218 "uuid": "bda4e49e-a071-45e9-b41d-7ccac321e1ef", 00:14:57.218 "strip_size_kb": 0, 00:14:57.218 "state": "online", 00:14:57.218 "raid_level": "raid1", 00:14:57.218 "superblock": true, 00:14:57.218 "num_base_bdevs": 2, 00:14:57.218 "num_base_bdevs_discovered": 1, 00:14:57.218 "num_base_bdevs_operational": 1, 00:14:57.218 "base_bdevs_list": [ 00:14:57.218 { 00:14:57.218 "name": null, 00:14:57.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.218 "is_configured": false, 00:14:57.218 "data_offset": 0, 00:14:57.218 "data_size": 63488 00:14:57.218 }, 00:14:57.218 { 00:14:57.218 "name": "BaseBdev2", 00:14:57.218 "uuid": "ebbca6dc-0c3d-574b-80ea-d86d63c8001b", 00:14:57.218 "is_configured": true, 00:14:57.218 "data_offset": 2048, 00:14:57.218 "data_size": 63488 00:14:57.218 } 00:14:57.218 ] 00:14:57.218 }' 00:14:57.218 21:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:57.478 21:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:57.478 21:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:57.478 21:41:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:57.478 21:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:57.478 21:41:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.478 21:41:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.478 [2024-12-10 21:41:58.105398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:57.478 21:41:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.478 21:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:57.478 [2024-12-10 21:41:58.171003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:57.478 [2024-12-10 21:41:58.172923] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:57.737 [2024-12-10 21:41:58.286537] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:57.738 [2024-12-10 21:41:58.287202] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:57.738 [2024-12-10 21:41:58.404071] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:57.738 [2024-12-10 21:41:58.404488] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:57.997 [2024-12-10 21:41:58.650935] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:57.997 [2024-12-10 21:41:58.761041] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:57.997 [2024-12-10 
21:41:58.761498] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:58.257 169.33 IOPS, 508.00 MiB/s [2024-12-10T21:41:59.040Z] [2024-12-10 21:41:59.002585] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:58.516 21:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:58.516 21:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.516 21:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:58.516 21:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:58.516 21:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.516 21:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.516 21:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.516 21:41:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.516 21:41:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.516 21:41:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.516 21:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.516 "name": "raid_bdev1", 00:14:58.516 "uuid": "bda4e49e-a071-45e9-b41d-7ccac321e1ef", 00:14:58.516 "strip_size_kb": 0, 00:14:58.516 "state": "online", 00:14:58.516 "raid_level": "raid1", 00:14:58.516 "superblock": true, 00:14:58.516 "num_base_bdevs": 2, 00:14:58.516 "num_base_bdevs_discovered": 2, 00:14:58.516 "num_base_bdevs_operational": 2, 00:14:58.516 "process": { 00:14:58.516 
"type": "rebuild", 00:14:58.516 "target": "spare", 00:14:58.516 "progress": { 00:14:58.516 "blocks": 16384, 00:14:58.516 "percent": 25 00:14:58.516 } 00:14:58.517 }, 00:14:58.517 "base_bdevs_list": [ 00:14:58.517 { 00:14:58.517 "name": "spare", 00:14:58.517 "uuid": "31a3f482-c2de-5468-896e-8bd1adb68244", 00:14:58.517 "is_configured": true, 00:14:58.517 "data_offset": 2048, 00:14:58.517 "data_size": 63488 00:14:58.517 }, 00:14:58.517 { 00:14:58.517 "name": "BaseBdev2", 00:14:58.517 "uuid": "ebbca6dc-0c3d-574b-80ea-d86d63c8001b", 00:14:58.517 "is_configured": true, 00:14:58.517 "data_offset": 2048, 00:14:58.517 "data_size": 63488 00:14:58.517 } 00:14:58.517 ] 00:14:58.517 }' 00:14:58.517 21:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.517 21:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:58.517 21:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.815 21:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:58.815 21:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:58.815 21:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:58.815 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:58.815 21:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:58.815 21:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:58.815 21:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:58.815 21:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=431 00:14:58.815 21:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:14:58.815 21:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:58.815 21:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.815 21:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:58.815 21:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:58.815 21:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.815 21:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.815 21:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.815 21:41:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.815 21:41:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.815 [2024-12-10 21:41:59.319726] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:58.815 21:41:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.815 21:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.815 "name": "raid_bdev1", 00:14:58.815 "uuid": "bda4e49e-a071-45e9-b41d-7ccac321e1ef", 00:14:58.815 "strip_size_kb": 0, 00:14:58.815 "state": "online", 00:14:58.815 "raid_level": "raid1", 00:14:58.815 "superblock": true, 00:14:58.815 "num_base_bdevs": 2, 00:14:58.815 "num_base_bdevs_discovered": 2, 00:14:58.815 "num_base_bdevs_operational": 2, 00:14:58.815 "process": { 00:14:58.815 "type": "rebuild", 00:14:58.815 "target": "spare", 00:14:58.815 "progress": { 00:14:58.815 "blocks": 18432, 00:14:58.815 "percent": 29 00:14:58.815 } 00:14:58.815 }, 00:14:58.815 
"base_bdevs_list": [ 00:14:58.815 { 00:14:58.815 "name": "spare", 00:14:58.815 "uuid": "31a3f482-c2de-5468-896e-8bd1adb68244", 00:14:58.815 "is_configured": true, 00:14:58.815 "data_offset": 2048, 00:14:58.816 "data_size": 63488 00:14:58.816 }, 00:14:58.816 { 00:14:58.816 "name": "BaseBdev2", 00:14:58.816 "uuid": "ebbca6dc-0c3d-574b-80ea-d86d63c8001b", 00:14:58.816 "is_configured": true, 00:14:58.816 "data_offset": 2048, 00:14:58.816 "data_size": 63488 00:14:58.816 } 00:14:58.816 ] 00:14:58.816 }' 00:14:58.816 21:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.816 21:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:58.816 21:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.816 [2024-12-10 21:41:59.421927] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:58.816 [2024-12-10 21:41:59.422359] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:58.816 21:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:58.816 21:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:59.077 [2024-12-10 21:41:59.783216] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:59.077 [2024-12-10 21:41:59.783609] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:59.595 145.00 IOPS, 435.00 MiB/s [2024-12-10T21:42:00.378Z] [2024-12-10 21:42:00.143793] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:59.595 [2024-12-10 21:42:00.360538] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:59.595 [2024-12-10 21:42:00.360888] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:59.855 21:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:59.855 21:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:59.855 21:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:59.855 21:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:59.855 21:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:59.855 21:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:59.855 21:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.855 21:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.855 21:42:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.855 21:42:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.855 21:42:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.855 21:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:59.855 "name": "raid_bdev1", 00:14:59.855 "uuid": "bda4e49e-a071-45e9-b41d-7ccac321e1ef", 00:14:59.855 "strip_size_kb": 0, 00:14:59.855 "state": "online", 00:14:59.855 "raid_level": "raid1", 00:14:59.855 "superblock": true, 00:14:59.855 "num_base_bdevs": 2, 00:14:59.855 "num_base_bdevs_discovered": 2, 00:14:59.855 "num_base_bdevs_operational": 2, 00:14:59.855 
"process": { 00:14:59.855 "type": "rebuild", 00:14:59.855 "target": "spare", 00:14:59.855 "progress": { 00:14:59.855 "blocks": 34816, 00:14:59.855 "percent": 54 00:14:59.855 } 00:14:59.855 }, 00:14:59.855 "base_bdevs_list": [ 00:14:59.855 { 00:14:59.855 "name": "spare", 00:14:59.855 "uuid": "31a3f482-c2de-5468-896e-8bd1adb68244", 00:14:59.855 "is_configured": true, 00:14:59.855 "data_offset": 2048, 00:14:59.855 "data_size": 63488 00:14:59.855 }, 00:14:59.855 { 00:14:59.855 "name": "BaseBdev2", 00:14:59.855 "uuid": "ebbca6dc-0c3d-574b-80ea-d86d63c8001b", 00:14:59.855 "is_configured": true, 00:14:59.855 "data_offset": 2048, 00:14:59.855 "data_size": 63488 00:14:59.855 } 00:14:59.855 ] 00:14:59.855 }' 00:14:59.855 21:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:59.855 21:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:59.855 21:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:59.855 21:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:59.855 21:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:00.372 125.00 IOPS, 375.00 MiB/s [2024-12-10T21:42:01.155Z] [2024-12-10 21:42:00.940732] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:15:00.630 [2024-12-10 21:42:01.158222] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:15:00.888 [2024-12-10 21:42:01.483732] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:15:00.888 [2024-12-10 21:42:01.593629] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:15:00.888 
[2024-12-10 21:42:01.593869] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:15:00.888 21:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:00.888 21:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:00.888 21:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.888 21:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:00.888 21:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:00.888 21:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.888 21:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.888 21:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.888 21:42:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.888 21:42:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.888 21:42:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.146 21:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:01.146 "name": "raid_bdev1", 00:15:01.146 "uuid": "bda4e49e-a071-45e9-b41d-7ccac321e1ef", 00:15:01.146 "strip_size_kb": 0, 00:15:01.146 "state": "online", 00:15:01.146 "raid_level": "raid1", 00:15:01.146 "superblock": true, 00:15:01.146 "num_base_bdevs": 2, 00:15:01.146 "num_base_bdevs_discovered": 2, 00:15:01.146 "num_base_bdevs_operational": 2, 00:15:01.146 "process": { 00:15:01.146 "type": "rebuild", 00:15:01.146 "target": "spare", 00:15:01.146 "progress": { 00:15:01.146 
"blocks": 53248, 00:15:01.146 "percent": 83 00:15:01.146 } 00:15:01.146 }, 00:15:01.146 "base_bdevs_list": [ 00:15:01.146 { 00:15:01.146 "name": "spare", 00:15:01.146 "uuid": "31a3f482-c2de-5468-896e-8bd1adb68244", 00:15:01.146 "is_configured": true, 00:15:01.146 "data_offset": 2048, 00:15:01.146 "data_size": 63488 00:15:01.146 }, 00:15:01.146 { 00:15:01.146 "name": "BaseBdev2", 00:15:01.146 "uuid": "ebbca6dc-0c3d-574b-80ea-d86d63c8001b", 00:15:01.146 "is_configured": true, 00:15:01.146 "data_offset": 2048, 00:15:01.146 "data_size": 63488 00:15:01.146 } 00:15:01.146 ] 00:15:01.146 }' 00:15:01.146 21:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:01.146 21:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:01.146 21:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:01.146 21:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:01.146 21:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:01.402 109.67 IOPS, 329.00 MiB/s [2024-12-10T21:42:02.185Z] [2024-12-10 21:42:02.171262] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:01.659 [2024-12-10 21:42:02.274587] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:01.659 [2024-12-10 21:42:02.276884] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.224 21:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:02.224 21:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.224 21:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.224 21:42:02 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.224 21:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.224 21:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.224 21:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.224 21:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.224 21:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.224 21:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.224 21:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.224 21:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.224 "name": "raid_bdev1", 00:15:02.224 "uuid": "bda4e49e-a071-45e9-b41d-7ccac321e1ef", 00:15:02.224 "strip_size_kb": 0, 00:15:02.224 "state": "online", 00:15:02.224 "raid_level": "raid1", 00:15:02.224 "superblock": true, 00:15:02.224 "num_base_bdevs": 2, 00:15:02.224 "num_base_bdevs_discovered": 2, 00:15:02.224 "num_base_bdevs_operational": 2, 00:15:02.224 "base_bdevs_list": [ 00:15:02.224 { 00:15:02.224 "name": "spare", 00:15:02.224 "uuid": "31a3f482-c2de-5468-896e-8bd1adb68244", 00:15:02.224 "is_configured": true, 00:15:02.224 "data_offset": 2048, 00:15:02.224 "data_size": 63488 00:15:02.224 }, 00:15:02.224 { 00:15:02.224 "name": "BaseBdev2", 00:15:02.224 "uuid": "ebbca6dc-0c3d-574b-80ea-d86d63c8001b", 00:15:02.224 "is_configured": true, 00:15:02.224 "data_offset": 2048, 00:15:02.224 "data_size": 63488 00:15:02.224 } 00:15:02.224 ] 00:15:02.224 }' 00:15:02.224 21:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.224 98.43 IOPS, 295.29 MiB/s 
[2024-12-10T21:42:03.007Z] 21:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:02.224 21:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.224 21:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:02.224 21:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:15:02.224 21:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:02.224 21:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.224 21:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:02.224 21:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:02.224 21:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.224 21:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.224 21:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.224 21:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.224 21:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.224 21:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.224 21:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.224 "name": "raid_bdev1", 00:15:02.224 "uuid": "bda4e49e-a071-45e9-b41d-7ccac321e1ef", 00:15:02.224 "strip_size_kb": 0, 00:15:02.224 "state": "online", 00:15:02.224 "raid_level": "raid1", 00:15:02.224 "superblock": true, 00:15:02.224 "num_base_bdevs": 2, 00:15:02.224 "num_base_bdevs_discovered": 2, 
00:15:02.224 "num_base_bdevs_operational": 2, 00:15:02.224 "base_bdevs_list": [ 00:15:02.224 { 00:15:02.224 "name": "spare", 00:15:02.224 "uuid": "31a3f482-c2de-5468-896e-8bd1adb68244", 00:15:02.224 "is_configured": true, 00:15:02.224 "data_offset": 2048, 00:15:02.224 "data_size": 63488 00:15:02.224 }, 00:15:02.224 { 00:15:02.224 "name": "BaseBdev2", 00:15:02.224 "uuid": "ebbca6dc-0c3d-574b-80ea-d86d63c8001b", 00:15:02.224 "is_configured": true, 00:15:02.224 "data_offset": 2048, 00:15:02.224 "data_size": 63488 00:15:02.224 } 00:15:02.224 ] 00:15:02.224 }' 00:15:02.224 21:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.482 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:02.482 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.482 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:02.482 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:02.482 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.482 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.482 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:02.482 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:02.482 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:02.482 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.482 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.482 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.482 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.482 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.482 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.482 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.482 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.482 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.482 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.482 "name": "raid_bdev1", 00:15:02.482 "uuid": "bda4e49e-a071-45e9-b41d-7ccac321e1ef", 00:15:02.482 "strip_size_kb": 0, 00:15:02.482 "state": "online", 00:15:02.482 "raid_level": "raid1", 00:15:02.482 "superblock": true, 00:15:02.482 "num_base_bdevs": 2, 00:15:02.482 "num_base_bdevs_discovered": 2, 00:15:02.482 "num_base_bdevs_operational": 2, 00:15:02.482 "base_bdevs_list": [ 00:15:02.482 { 00:15:02.482 "name": "spare", 00:15:02.482 "uuid": "31a3f482-c2de-5468-896e-8bd1adb68244", 00:15:02.482 "is_configured": true, 00:15:02.482 "data_offset": 2048, 00:15:02.482 "data_size": 63488 00:15:02.482 }, 00:15:02.482 { 00:15:02.482 "name": "BaseBdev2", 00:15:02.482 "uuid": "ebbca6dc-0c3d-574b-80ea-d86d63c8001b", 00:15:02.482 "is_configured": true, 00:15:02.482 "data_offset": 2048, 00:15:02.482 "data_size": 63488 00:15:02.482 } 00:15:02.482 ] 00:15:02.482 }' 00:15:02.482 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.482 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.740 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd 
bdev_raid_delete raid_bdev1 00:15:02.740 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.740 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.740 [2024-12-10 21:42:03.475172] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:02.740 [2024-12-10 21:42:03.475263] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:02.998 00:15:02.998 Latency(us) 00:15:02.998 [2024-12-10T21:42:03.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:02.998 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:02.998 raid_bdev1 : 7.72 92.18 276.55 0.00 0.00 14824.26 309.44 110810.21 00:15:02.998 [2024-12-10T21:42:03.781Z] =================================================================================================================== 00:15:02.998 [2024-12-10T21:42:03.781Z] Total : 92.18 276.55 0.00 0.00 14824.26 309.44 110810.21 00:15:02.998 [2024-12-10 21:42:03.586672] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:02.998 [2024-12-10 21:42:03.586791] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.998 [2024-12-10 21:42:03.586890] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:02.998 [2024-12-10 21:42:03.586955] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:02.998 { 00:15:02.998 "results": [ 00:15:02.998 { 00:15:02.998 "job": "raid_bdev1", 00:15:02.998 "core_mask": "0x1", 00:15:02.998 "workload": "randrw", 00:15:02.998 "percentage": 50, 00:15:02.998 "status": "finished", 00:15:02.998 "queue_depth": 2, 00:15:02.998 "io_size": 3145728, 00:15:02.998 "runtime": 7.723752, 00:15:02.998 "iops": 92.18317729517986, 00:15:02.998 "mibps": 
276.5495318855396, 00:15:02.998 "io_failed": 0, 00:15:02.998 "io_timeout": 0, 00:15:02.998 "avg_latency_us": 14824.256474167116, 00:15:02.998 "min_latency_us": 309.435807860262, 00:15:02.998 "max_latency_us": 110810.21484716157 00:15:02.998 } 00:15:02.998 ], 00:15:02.998 "core_count": 1 00:15:02.998 } 00:15:02.998 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.998 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.998 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.998 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.998 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:02.998 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.998 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:02.998 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:02.998 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:02.998 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:02.998 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:02.998 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:02.998 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:02.998 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:02.998 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:02.998 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@12 -- # local i 00:15:02.998 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:02.998 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:02.998 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:03.257 /dev/nbd0 00:15:03.257 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:03.257 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:03.257 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:03.257 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:03.257 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:03.257 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:03.257 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:03.257 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:03.257 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:03.257 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:03.257 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:03.257 1+0 records in 00:15:03.257 1+0 records out 00:15:03.257 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335129 s, 12.2 MB/s 00:15:03.257 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:15:03.257 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:03.257 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.257 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:03.257 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:03.257 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:03.257 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:03.257 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:03.257 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:15:03.257 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:15:03.257 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:03.257 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:15:03.257 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:03.257 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:03.257 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:03.257 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:03.257 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:03.257 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:03.257 21:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:15:03.580 /dev/nbd1 00:15:03.580 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:03.580 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:03.580 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:03.580 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:03.580 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:03.580 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:03.580 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:03.580 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:03.580 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:03.580 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:03.580 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:03.580 1+0 records in 00:15:03.580 1+0 records out 00:15:03.580 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226524 s, 18.1 MB/s 00:15:03.580 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.580 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:03.580 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.581 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:15:03.581 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:03.581 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:03.581 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:03.581 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:03.581 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:03.581 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:03.581 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:03.581 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:03.581 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:03.581 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:03.581 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:03.839 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:03.839 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:03.839 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:03.839 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:03.839 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:03.839 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:03.839 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 
00:15:03.839 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:03.839 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:03.839 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:03.839 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:03.839 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:03.839 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:03.839 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:03.839 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:04.098 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:04.098 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:04.098 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:04.098 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:04.098 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:04.098 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:04.098 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:04.098 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:04.098 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:04.098 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 
00:15:04.098 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.099 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.099 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.099 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:04.099 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.099 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.099 [2024-12-10 21:42:04.824837] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:04.099 [2024-12-10 21:42:04.824920] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.099 [2024-12-10 21:42:04.824946] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:04.099 [2024-12-10 21:42:04.824958] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.099 [2024-12-10 21:42:04.827487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.099 [2024-12-10 21:42:04.827589] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:04.099 [2024-12-10 21:42:04.827725] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:04.099 [2024-12-10 21:42:04.827785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:04.099 [2024-12-10 21:42:04.828015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:04.099 spare 00:15:04.099 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.099 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd 
bdev_wait_for_examine 00:15:04.099 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.099 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.358 [2024-12-10 21:42:04.927946] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:04.358 [2024-12-10 21:42:04.928012] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:04.358 [2024-12-10 21:42:04.928369] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:15:04.358 [2024-12-10 21:42:04.928598] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:04.358 [2024-12-10 21:42:04.928609] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:04.358 [2024-12-10 21:42:04.928836] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.358 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.358 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:04.358 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:04.358 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.358 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:04.358 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:04.358 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:04.358 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.358 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.358 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.358 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.358 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.358 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.358 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.358 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.358 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.358 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.358 "name": "raid_bdev1", 00:15:04.358 "uuid": "bda4e49e-a071-45e9-b41d-7ccac321e1ef", 00:15:04.358 "strip_size_kb": 0, 00:15:04.358 "state": "online", 00:15:04.358 "raid_level": "raid1", 00:15:04.358 "superblock": true, 00:15:04.358 "num_base_bdevs": 2, 00:15:04.358 "num_base_bdevs_discovered": 2, 00:15:04.358 "num_base_bdevs_operational": 2, 00:15:04.358 "base_bdevs_list": [ 00:15:04.358 { 00:15:04.358 "name": "spare", 00:15:04.358 "uuid": "31a3f482-c2de-5468-896e-8bd1adb68244", 00:15:04.358 "is_configured": true, 00:15:04.358 "data_offset": 2048, 00:15:04.358 "data_size": 63488 00:15:04.358 }, 00:15:04.358 { 00:15:04.358 "name": "BaseBdev2", 00:15:04.358 "uuid": "ebbca6dc-0c3d-574b-80ea-d86d63c8001b", 00:15:04.358 "is_configured": true, 00:15:04.358 "data_offset": 2048, 00:15:04.358 "data_size": 63488 00:15:04.358 } 00:15:04.358 ] 00:15:04.358 }' 00:15:04.358 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.358 21:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:15:04.617 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:04.617 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.617 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:04.617 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:04.617 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.617 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.617 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.617 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.617 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.875 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.875 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.875 "name": "raid_bdev1", 00:15:04.875 "uuid": "bda4e49e-a071-45e9-b41d-7ccac321e1ef", 00:15:04.875 "strip_size_kb": 0, 00:15:04.875 "state": "online", 00:15:04.875 "raid_level": "raid1", 00:15:04.875 "superblock": true, 00:15:04.875 "num_base_bdevs": 2, 00:15:04.875 "num_base_bdevs_discovered": 2, 00:15:04.875 "num_base_bdevs_operational": 2, 00:15:04.875 "base_bdevs_list": [ 00:15:04.875 { 00:15:04.875 "name": "spare", 00:15:04.875 "uuid": "31a3f482-c2de-5468-896e-8bd1adb68244", 00:15:04.875 "is_configured": true, 00:15:04.875 "data_offset": 2048, 00:15:04.875 "data_size": 63488 00:15:04.875 }, 00:15:04.875 { 00:15:04.875 "name": "BaseBdev2", 00:15:04.875 "uuid": "ebbca6dc-0c3d-574b-80ea-d86d63c8001b", 00:15:04.875 "is_configured": 
true, 00:15:04.875 "data_offset": 2048, 00:15:04.875 "data_size": 63488 00:15:04.875 } 00:15:04.875 ] 00:15:04.875 }' 00:15:04.875 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.875 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:04.876 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.876 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:04.876 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.876 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:04.876 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.876 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.876 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.876 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:04.876 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:04.876 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.876 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.876 [2024-12-10 21:42:05.595963] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:04.876 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.876 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:04.876 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:04.876 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.876 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:04.876 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:04.876 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:04.876 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.876 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.876 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.876 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.876 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.876 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.876 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.876 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.876 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.134 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.134 "name": "raid_bdev1", 00:15:05.134 "uuid": "bda4e49e-a071-45e9-b41d-7ccac321e1ef", 00:15:05.134 "strip_size_kb": 0, 00:15:05.134 "state": "online", 00:15:05.134 "raid_level": "raid1", 00:15:05.134 "superblock": true, 00:15:05.134 "num_base_bdevs": 2, 00:15:05.134 "num_base_bdevs_discovered": 1, 00:15:05.134 "num_base_bdevs_operational": 1, 00:15:05.134 "base_bdevs_list": [ 
00:15:05.134 { 00:15:05.134 "name": null, 00:15:05.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.134 "is_configured": false, 00:15:05.134 "data_offset": 0, 00:15:05.134 "data_size": 63488 00:15:05.134 }, 00:15:05.134 { 00:15:05.134 "name": "BaseBdev2", 00:15:05.134 "uuid": "ebbca6dc-0c3d-574b-80ea-d86d63c8001b", 00:15:05.134 "is_configured": true, 00:15:05.134 "data_offset": 2048, 00:15:05.134 "data_size": 63488 00:15:05.134 } 00:15:05.134 ] 00:15:05.134 }' 00:15:05.134 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.134 21:42:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.393 21:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:05.393 21:42:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.393 21:42:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.393 [2024-12-10 21:42:06.023360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:05.393 [2024-12-10 21:42:06.023660] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:05.393 [2024-12-10 21:42:06.023732] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:05.393 [2024-12-10 21:42:06.023836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:05.393 [2024-12-10 21:42:06.041567] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:15:05.393 21:42:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.393 21:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:05.393 [2024-12-10 21:42:06.043686] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:06.328 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:06.328 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.328 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.328 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:06.328 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.328 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.328 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.328 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.328 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.328 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.328 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.328 "name": "raid_bdev1", 00:15:06.328 "uuid": "bda4e49e-a071-45e9-b41d-7ccac321e1ef", 00:15:06.328 "strip_size_kb": 0, 00:15:06.328 "state": "online", 
00:15:06.328 "raid_level": "raid1", 00:15:06.328 "superblock": true, 00:15:06.328 "num_base_bdevs": 2, 00:15:06.328 "num_base_bdevs_discovered": 2, 00:15:06.328 "num_base_bdevs_operational": 2, 00:15:06.328 "process": { 00:15:06.328 "type": "rebuild", 00:15:06.328 "target": "spare", 00:15:06.328 "progress": { 00:15:06.328 "blocks": 20480, 00:15:06.328 "percent": 32 00:15:06.328 } 00:15:06.328 }, 00:15:06.328 "base_bdevs_list": [ 00:15:06.328 { 00:15:06.328 "name": "spare", 00:15:06.328 "uuid": "31a3f482-c2de-5468-896e-8bd1adb68244", 00:15:06.328 "is_configured": true, 00:15:06.328 "data_offset": 2048, 00:15:06.328 "data_size": 63488 00:15:06.328 }, 00:15:06.328 { 00:15:06.328 "name": "BaseBdev2", 00:15:06.329 "uuid": "ebbca6dc-0c3d-574b-80ea-d86d63c8001b", 00:15:06.329 "is_configured": true, 00:15:06.329 "data_offset": 2048, 00:15:06.329 "data_size": 63488 00:15:06.329 } 00:15:06.329 ] 00:15:06.329 }' 00:15:06.329 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.586 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:06.586 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.586 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:06.586 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:06.586 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.586 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.586 [2024-12-10 21:42:07.199494] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:06.586 [2024-12-10 21:42:07.249780] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:06.586 [2024-12-10 
21:42:07.249863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.586 [2024-12-10 21:42:07.249882] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:06.586 [2024-12-10 21:42:07.249892] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:06.586 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.586 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:06.586 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.586 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.586 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:06.586 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:06.587 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:06.587 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.587 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.587 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.587 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.587 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.587 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.587 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.587 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:06.587 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.587 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.587 "name": "raid_bdev1", 00:15:06.587 "uuid": "bda4e49e-a071-45e9-b41d-7ccac321e1ef", 00:15:06.587 "strip_size_kb": 0, 00:15:06.587 "state": "online", 00:15:06.587 "raid_level": "raid1", 00:15:06.587 "superblock": true, 00:15:06.587 "num_base_bdevs": 2, 00:15:06.587 "num_base_bdevs_discovered": 1, 00:15:06.587 "num_base_bdevs_operational": 1, 00:15:06.587 "base_bdevs_list": [ 00:15:06.587 { 00:15:06.587 "name": null, 00:15:06.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.587 "is_configured": false, 00:15:06.587 "data_offset": 0, 00:15:06.587 "data_size": 63488 00:15:06.587 }, 00:15:06.587 { 00:15:06.587 "name": "BaseBdev2", 00:15:06.587 "uuid": "ebbca6dc-0c3d-574b-80ea-d86d63c8001b", 00:15:06.587 "is_configured": true, 00:15:06.587 "data_offset": 2048, 00:15:06.587 "data_size": 63488 00:15:06.587 } 00:15:06.587 ] 00:15:06.587 }' 00:15:06.587 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.587 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.155 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:07.155 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.155 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.155 [2024-12-10 21:42:07.732757] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:07.155 [2024-12-10 21:42:07.732914] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.155 [2024-12-10 21:42:07.732964] vbdev_passthru.c: 682:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:15:07.155 [2024-12-10 21:42:07.733003] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.155 [2024-12-10 21:42:07.733595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.155 [2024-12-10 21:42:07.733671] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:07.155 [2024-12-10 21:42:07.733822] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:07.155 [2024-12-10 21:42:07.733872] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:07.155 [2024-12-10 21:42:07.733920] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:07.155 [2024-12-10 21:42:07.733994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:07.155 [2024-12-10 21:42:07.752506] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:15:07.155 spare 00:15:07.155 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.155 21:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:07.155 [2024-12-10 21:42:07.754628] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:08.127 21:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:08.127 21:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.127 21:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:08.127 21:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:08.127 21:42:08 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.127 21:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.127 21:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.127 21:42:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.127 21:42:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.127 21:42:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.127 21:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.127 "name": "raid_bdev1", 00:15:08.127 "uuid": "bda4e49e-a071-45e9-b41d-7ccac321e1ef", 00:15:08.127 "strip_size_kb": 0, 00:15:08.127 "state": "online", 00:15:08.127 "raid_level": "raid1", 00:15:08.127 "superblock": true, 00:15:08.127 "num_base_bdevs": 2, 00:15:08.127 "num_base_bdevs_discovered": 2, 00:15:08.127 "num_base_bdevs_operational": 2, 00:15:08.127 "process": { 00:15:08.127 "type": "rebuild", 00:15:08.127 "target": "spare", 00:15:08.127 "progress": { 00:15:08.127 "blocks": 20480, 00:15:08.127 "percent": 32 00:15:08.127 } 00:15:08.127 }, 00:15:08.127 "base_bdevs_list": [ 00:15:08.127 { 00:15:08.127 "name": "spare", 00:15:08.127 "uuid": "31a3f482-c2de-5468-896e-8bd1adb68244", 00:15:08.127 "is_configured": true, 00:15:08.127 "data_offset": 2048, 00:15:08.127 "data_size": 63488 00:15:08.127 }, 00:15:08.127 { 00:15:08.127 "name": "BaseBdev2", 00:15:08.127 "uuid": "ebbca6dc-0c3d-574b-80ea-d86d63c8001b", 00:15:08.127 "is_configured": true, 00:15:08.127 "data_offset": 2048, 00:15:08.127 "data_size": 63488 00:15:08.127 } 00:15:08.127 ] 00:15:08.127 }' 00:15:08.127 21:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.127 21:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:15:08.127 21:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.127 21:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:08.127 21:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:08.127 21:42:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.127 21:42:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.127 [2024-12-10 21:42:08.890333] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:08.386 [2024-12-10 21:42:08.960616] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:08.386 [2024-12-10 21:42:08.960818] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.386 [2024-12-10 21:42:08.960868] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:08.386 [2024-12-10 21:42:08.960881] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:08.386 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.386 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:08.386 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:08.386 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.386 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:08.386 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:08.386 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:15:08.386 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.386 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.386 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.386 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.386 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.387 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.387 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.387 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.387 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.387 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.387 "name": "raid_bdev1", 00:15:08.387 "uuid": "bda4e49e-a071-45e9-b41d-7ccac321e1ef", 00:15:08.387 "strip_size_kb": 0, 00:15:08.387 "state": "online", 00:15:08.387 "raid_level": "raid1", 00:15:08.387 "superblock": true, 00:15:08.387 "num_base_bdevs": 2, 00:15:08.387 "num_base_bdevs_discovered": 1, 00:15:08.387 "num_base_bdevs_operational": 1, 00:15:08.387 "base_bdevs_list": [ 00:15:08.387 { 00:15:08.387 "name": null, 00:15:08.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.387 "is_configured": false, 00:15:08.387 "data_offset": 0, 00:15:08.387 "data_size": 63488 00:15:08.387 }, 00:15:08.387 { 00:15:08.387 "name": "BaseBdev2", 00:15:08.387 "uuid": "ebbca6dc-0c3d-574b-80ea-d86d63c8001b", 00:15:08.387 "is_configured": true, 00:15:08.387 "data_offset": 2048, 00:15:08.387 "data_size": 63488 00:15:08.387 } 00:15:08.387 ] 00:15:08.387 }' 
00:15:08.387 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.387 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.956 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:08.956 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.956 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:08.956 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:08.956 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.956 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.956 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.956 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.956 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.956 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.956 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.956 "name": "raid_bdev1", 00:15:08.956 "uuid": "bda4e49e-a071-45e9-b41d-7ccac321e1ef", 00:15:08.956 "strip_size_kb": 0, 00:15:08.956 "state": "online", 00:15:08.956 "raid_level": "raid1", 00:15:08.956 "superblock": true, 00:15:08.956 "num_base_bdevs": 2, 00:15:08.956 "num_base_bdevs_discovered": 1, 00:15:08.956 "num_base_bdevs_operational": 1, 00:15:08.956 "base_bdevs_list": [ 00:15:08.956 { 00:15:08.956 "name": null, 00:15:08.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.956 "is_configured": false, 00:15:08.956 "data_offset": 0, 
00:15:08.956 "data_size": 63488 00:15:08.956 }, 00:15:08.956 { 00:15:08.956 "name": "BaseBdev2", 00:15:08.956 "uuid": "ebbca6dc-0c3d-574b-80ea-d86d63c8001b", 00:15:08.956 "is_configured": true, 00:15:08.956 "data_offset": 2048, 00:15:08.956 "data_size": 63488 00:15:08.956 } 00:15:08.956 ] 00:15:08.956 }' 00:15:08.956 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.956 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:08.956 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.956 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:08.956 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:08.956 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.956 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.956 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.956 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:08.956 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.956 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.956 [2024-12-10 21:42:09.603132] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:08.956 [2024-12-10 21:42:09.603201] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.956 [2024-12-10 21:42:09.603226] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:08.956 [2024-12-10 21:42:09.603235] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.956 [2024-12-10 21:42:09.603725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.956 [2024-12-10 21:42:09.603754] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:08.956 [2024-12-10 21:42:09.603867] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:08.956 [2024-12-10 21:42:09.603882] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:08.956 [2024-12-10 21:42:09.603896] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:08.956 [2024-12-10 21:42:09.603907] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:08.956 BaseBdev1 00:15:08.956 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.956 21:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:09.893 21:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:09.893 21:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.893 21:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:09.893 21:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:09.893 21:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:09.893 21:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:09.893 21:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.893 21:42:10 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.893 21:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.893 21:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.893 21:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.893 21:42:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.893 21:42:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.893 21:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.893 21:42:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.153 21:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.153 "name": "raid_bdev1", 00:15:10.153 "uuid": "bda4e49e-a071-45e9-b41d-7ccac321e1ef", 00:15:10.153 "strip_size_kb": 0, 00:15:10.153 "state": "online", 00:15:10.153 "raid_level": "raid1", 00:15:10.153 "superblock": true, 00:15:10.153 "num_base_bdevs": 2, 00:15:10.153 "num_base_bdevs_discovered": 1, 00:15:10.153 "num_base_bdevs_operational": 1, 00:15:10.153 "base_bdevs_list": [ 00:15:10.153 { 00:15:10.153 "name": null, 00:15:10.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.153 "is_configured": false, 00:15:10.153 "data_offset": 0, 00:15:10.153 "data_size": 63488 00:15:10.153 }, 00:15:10.153 { 00:15:10.153 "name": "BaseBdev2", 00:15:10.153 "uuid": "ebbca6dc-0c3d-574b-80ea-d86d63c8001b", 00:15:10.153 "is_configured": true, 00:15:10.153 "data_offset": 2048, 00:15:10.153 "data_size": 63488 00:15:10.153 } 00:15:10.153 ] 00:15:10.153 }' 00:15:10.153 21:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.153 21:42:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:15:10.412 21:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:10.412 21:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.412 21:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:10.412 21:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:10.412 21:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.412 21:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.412 21:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.412 21:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.412 21:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.412 21:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.412 21:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.412 "name": "raid_bdev1", 00:15:10.412 "uuid": "bda4e49e-a071-45e9-b41d-7ccac321e1ef", 00:15:10.412 "strip_size_kb": 0, 00:15:10.412 "state": "online", 00:15:10.412 "raid_level": "raid1", 00:15:10.412 "superblock": true, 00:15:10.412 "num_base_bdevs": 2, 00:15:10.412 "num_base_bdevs_discovered": 1, 00:15:10.412 "num_base_bdevs_operational": 1, 00:15:10.412 "base_bdevs_list": [ 00:15:10.412 { 00:15:10.412 "name": null, 00:15:10.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.412 "is_configured": false, 00:15:10.412 "data_offset": 0, 00:15:10.412 "data_size": 63488 00:15:10.412 }, 00:15:10.412 { 00:15:10.412 "name": "BaseBdev2", 00:15:10.412 "uuid": "ebbca6dc-0c3d-574b-80ea-d86d63c8001b", 00:15:10.412 "is_configured": true, 
00:15:10.412 "data_offset": 2048, 00:15:10.412 "data_size": 63488 00:15:10.412 } 00:15:10.412 ] 00:15:10.412 }' 00:15:10.412 21:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.412 21:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:10.412 21:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.672 21:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:10.672 21:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:10.672 21:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:15:10.673 21:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:10.673 21:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:10.673 21:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:10.673 21:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:10.673 21:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:10.673 21:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:10.673 21:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.673 21:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.673 [2024-12-10 21:42:11.248646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:10.673 [2024-12-10 21:42:11.248875] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:10.673 [2024-12-10 21:42:11.248947] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:10.673 request: 00:15:10.673 { 00:15:10.673 "base_bdev": "BaseBdev1", 00:15:10.673 "raid_bdev": "raid_bdev1", 00:15:10.673 "method": "bdev_raid_add_base_bdev", 00:15:10.673 "req_id": 1 00:15:10.673 } 00:15:10.673 Got JSON-RPC error response 00:15:10.673 response: 00:15:10.673 { 00:15:10.673 "code": -22, 00:15:10.673 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:10.673 } 00:15:10.673 21:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:10.673 21:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:15:10.673 21:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:10.673 21:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:10.673 21:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:10.673 21:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:11.611 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:11.611 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.611 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.611 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:11.611 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:11.611 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:15:11.611 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.611 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.611 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.611 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.612 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.612 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.612 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.612 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.612 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.612 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.612 "name": "raid_bdev1", 00:15:11.612 "uuid": "bda4e49e-a071-45e9-b41d-7ccac321e1ef", 00:15:11.612 "strip_size_kb": 0, 00:15:11.612 "state": "online", 00:15:11.612 "raid_level": "raid1", 00:15:11.612 "superblock": true, 00:15:11.612 "num_base_bdevs": 2, 00:15:11.612 "num_base_bdevs_discovered": 1, 00:15:11.612 "num_base_bdevs_operational": 1, 00:15:11.612 "base_bdevs_list": [ 00:15:11.612 { 00:15:11.612 "name": null, 00:15:11.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.612 "is_configured": false, 00:15:11.612 "data_offset": 0, 00:15:11.612 "data_size": 63488 00:15:11.612 }, 00:15:11.612 { 00:15:11.612 "name": "BaseBdev2", 00:15:11.612 "uuid": "ebbca6dc-0c3d-574b-80ea-d86d63c8001b", 00:15:11.612 "is_configured": true, 00:15:11.612 "data_offset": 2048, 00:15:11.612 "data_size": 63488 00:15:11.612 } 00:15:11.612 ] 00:15:11.612 }' 
00:15:11.612 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.612 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.181 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:12.181 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.181 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:12.181 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:12.181 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.181 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.181 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.181 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.181 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.181 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.181 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.181 "name": "raid_bdev1", 00:15:12.181 "uuid": "bda4e49e-a071-45e9-b41d-7ccac321e1ef", 00:15:12.181 "strip_size_kb": 0, 00:15:12.181 "state": "online", 00:15:12.181 "raid_level": "raid1", 00:15:12.181 "superblock": true, 00:15:12.181 "num_base_bdevs": 2, 00:15:12.181 "num_base_bdevs_discovered": 1, 00:15:12.181 "num_base_bdevs_operational": 1, 00:15:12.181 "base_bdevs_list": [ 00:15:12.181 { 00:15:12.181 "name": null, 00:15:12.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.181 "is_configured": false, 00:15:12.181 "data_offset": 0, 
00:15:12.181 "data_size": 63488 00:15:12.181 }, 00:15:12.181 { 00:15:12.181 "name": "BaseBdev2", 00:15:12.181 "uuid": "ebbca6dc-0c3d-574b-80ea-d86d63c8001b", 00:15:12.181 "is_configured": true, 00:15:12.181 "data_offset": 2048, 00:15:12.181 "data_size": 63488 00:15:12.181 } 00:15:12.181 ] 00:15:12.181 }' 00:15:12.181 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.181 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:12.181 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.181 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:12.181 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77025 00:15:12.181 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77025 ']' 00:15:12.181 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77025 00:15:12.181 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:15:12.181 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:12.181 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77025 00:15:12.181 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:12.181 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:12.181 killing process with pid 77025 00:15:12.181 Received shutdown signal, test time was about 17.081091 seconds 00:15:12.181 00:15:12.181 Latency(us) 00:15:12.181 [2024-12-10T21:42:12.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.181 [2024-12-10T21:42:12.964Z] 
=================================================================================================================== 00:15:12.181 [2024-12-10T21:42:12.964Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:12.181 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77025' 00:15:12.181 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77025 00:15:12.181 [2024-12-10 21:42:12.902271] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:12.181 [2024-12-10 21:42:12.902410] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:12.181 21:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77025 00:15:12.181 [2024-12-10 21:42:12.902483] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:12.181 [2024-12-10 21:42:12.902497] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:12.440 [2024-12-10 21:42:13.151268] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:13.832 00:15:13.832 real 0m20.307s 00:15:13.832 user 0m26.599s 00:15:13.832 sys 0m2.187s 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.832 ************************************ 00:15:13.832 END TEST raid_rebuild_test_sb_io 00:15:13.832 ************************************ 00:15:13.832 21:42:14 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:15:13.832 21:42:14 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:15:13.832 21:42:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 
7 -le 1 ']' 00:15:13.832 21:42:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:13.832 21:42:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:13.832 ************************************ 00:15:13.832 START TEST raid_rebuild_test 00:15:13.832 ************************************ 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:13.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77714 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77714 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77714 ']' 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:13.832 21:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.832 [2024-12-10 21:42:14.540468] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:15:13.832 [2024-12-10 21:42:14.540666] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:15:13.832 Zero copy mechanism will not be used. 
00:15:13.832 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77714 ] 00:15:14.092 [2024-12-10 21:42:14.695031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.092 [2024-12-10 21:42:14.811469] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.352 [2024-12-10 21:42:15.018733] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:14.352 [2024-12-10 21:42:15.018868] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:14.610 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:14.610 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:14.610 21:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:14.610 21:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:14.610 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.610 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.869 BaseBdev1_malloc 00:15:14.869 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.869 21:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:14.869 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.869 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.869 [2024-12-10 21:42:15.441375] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:14.869 [2024-12-10 21:42:15.441478] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.869 [2024-12-10 
21:42:15.441509] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:14.869 [2024-12-10 21:42:15.441526] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.869 [2024-12-10 21:42:15.444359] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.869 [2024-12-10 21:42:15.444414] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:14.869 BaseBdev1 00:15:14.869 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.869 21:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:14.869 21:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:14.869 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.869 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.869 BaseBdev2_malloc 00:15:14.869 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.869 21:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:14.869 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.869 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.870 [2024-12-10 21:42:15.497609] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:14.870 [2024-12-10 21:42:15.497730] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.870 [2024-12-10 21:42:15.497754] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:14.870 [2024-12-10 21:42:15.497768] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:15:14.870 [2024-12-10 21:42:15.500069] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.870 [2024-12-10 21:42:15.500110] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:14.870 BaseBdev2 00:15:14.870 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.870 21:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:14.870 21:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:14.870 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.870 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.870 BaseBdev3_malloc 00:15:14.870 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.870 21:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:14.870 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.870 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.870 [2024-12-10 21:42:15.565142] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:14.870 [2024-12-10 21:42:15.565270] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.870 [2024-12-10 21:42:15.565300] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:14.870 [2024-12-10 21:42:15.565313] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.870 [2024-12-10 21:42:15.567707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.870 [2024-12-10 21:42:15.567751] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: BaseBdev3 00:15:14.870 BaseBdev3 00:15:14.870 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.870 21:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:14.870 21:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:14.870 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.870 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.870 BaseBdev4_malloc 00:15:14.870 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.870 21:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:14.870 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.870 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.870 [2024-12-10 21:42:15.619611] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:14.870 [2024-12-10 21:42:15.619746] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.870 [2024-12-10 21:42:15.619782] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:14.870 [2024-12-10 21:42:15.619794] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.870 [2024-12-10 21:42:15.622314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.870 [2024-12-10 21:42:15.622356] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:14.870 BaseBdev4 00:15:14.870 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.870 21:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- 
# rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:14.870 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.870 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.130 spare_malloc 00:15:15.130 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.130 21:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:15.130 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.130 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.130 spare_delay 00:15:15.130 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.130 21:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:15.130 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.130 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.130 [2024-12-10 21:42:15.686291] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:15.130 [2024-12-10 21:42:15.686350] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.130 [2024-12-10 21:42:15.686392] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:15.130 [2024-12-10 21:42:15.686403] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.130 [2024-12-10 21:42:15.688788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.130 [2024-12-10 21:42:15.688905] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:15.130 spare 00:15:15.130 21:42:15 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.130 21:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:15.130 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.130 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.130 [2024-12-10 21:42:15.698307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:15.130 [2024-12-10 21:42:15.700275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:15.130 [2024-12-10 21:42:15.700406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:15.130 [2024-12-10 21:42:15.700492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:15.130 [2024-12-10 21:42:15.700597] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:15.130 [2024-12-10 21:42:15.700615] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:15.130 [2024-12-10 21:42:15.700886] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:15.130 [2024-12-10 21:42:15.701070] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:15.130 [2024-12-10 21:42:15.701082] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:15.130 [2024-12-10 21:42:15.701236] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.130 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.130 21:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:15.130 21:42:15 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.130 21:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.130 21:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:15.130 21:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:15.130 21:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:15.130 21:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.130 21:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.130 21:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.130 21:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.130 21:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.130 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.130 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.130 21:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.130 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.130 21:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.130 "name": "raid_bdev1", 00:15:15.130 "uuid": "9e9c9dbf-4359-4280-9874-64d4c7304365", 00:15:15.130 "strip_size_kb": 0, 00:15:15.130 "state": "online", 00:15:15.130 "raid_level": "raid1", 00:15:15.130 "superblock": false, 00:15:15.130 "num_base_bdevs": 4, 00:15:15.130 "num_base_bdevs_discovered": 4, 00:15:15.130 "num_base_bdevs_operational": 4, 00:15:15.130 "base_bdevs_list": [ 00:15:15.130 { 00:15:15.130 "name": "BaseBdev1", 00:15:15.130 "uuid": 
"4543bc60-e40d-5588-aa74-12cb4f451c88", 00:15:15.130 "is_configured": true, 00:15:15.130 "data_offset": 0, 00:15:15.130 "data_size": 65536 00:15:15.130 }, 00:15:15.130 { 00:15:15.130 "name": "BaseBdev2", 00:15:15.130 "uuid": "6b320be4-ad70-5a82-85be-0377df13c273", 00:15:15.130 "is_configured": true, 00:15:15.130 "data_offset": 0, 00:15:15.130 "data_size": 65536 00:15:15.130 }, 00:15:15.130 { 00:15:15.130 "name": "BaseBdev3", 00:15:15.130 "uuid": "e1839384-cf7a-5879-bfcb-35497f57b01b", 00:15:15.130 "is_configured": true, 00:15:15.130 "data_offset": 0, 00:15:15.130 "data_size": 65536 00:15:15.130 }, 00:15:15.130 { 00:15:15.130 "name": "BaseBdev4", 00:15:15.130 "uuid": "5c2552e7-c6c4-507a-9beb-03c341b75af7", 00:15:15.130 "is_configured": true, 00:15:15.130 "data_offset": 0, 00:15:15.130 "data_size": 65536 00:15:15.130 } 00:15:15.130 ] 00:15:15.130 }' 00:15:15.130 21:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.130 21:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.390 21:42:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:15.390 21:42:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.390 21:42:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.390 21:42:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:15.390 [2024-12-10 21:42:16.141972] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:15.390 21:42:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.650 21:42:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:15.650 21:42:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:15.650 21:42:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:15.650 21:42:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.650 21:42:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.650 21:42:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.650 21:42:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:15.650 21:42:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:15.650 21:42:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:15.650 21:42:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:15.650 21:42:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:15.650 21:42:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:15.650 21:42:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:15.650 21:42:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:15.650 21:42:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:15.650 21:42:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:15.650 21:42:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:15.650 21:42:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:15.650 21:42:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:15.650 21:42:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:15.650 [2024-12-10 21:42:16.429154] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:15.910 /dev/nbd0 00:15:15.910 21:42:16 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:15.910 21:42:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:15.910 21:42:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:15.910 21:42:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:15.910 21:42:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:15.910 21:42:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:15.910 21:42:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:15.910 21:42:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:15.910 21:42:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:15.910 21:42:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:15.910 21:42:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:15.910 1+0 records in 00:15:15.910 1+0 records out 00:15:15.910 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000565332 s, 7.2 MB/s 00:15:15.910 21:42:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.910 21:42:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:15.910 21:42:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.910 21:42:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:15.910 21:42:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:15.910 21:42:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:15:15.910 21:42:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:15.910 21:42:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:15.910 21:42:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:15.910 21:42:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:15:22.482 65536+0 records in 00:15:22.482 65536+0 records out 00:15:22.482 33554432 bytes (34 MB, 32 MiB) copied, 5.85111 s, 5.7 MB/s 00:15:22.482 21:42:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:22.482 21:42:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:22.482 21:42:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:22.482 21:42:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:22.483 21:42:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:22.483 21:42:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:22.483 21:42:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:22.483 [2024-12-10 21:42:22.557026] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.483 21:42:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:22.483 21:42:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:22.483 21:42:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:22.483 21:42:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:22.483 21:42:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:22.483 21:42:22 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:22.483 21:42:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:22.483 21:42:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:22.483 21:42:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:22.483 21:42:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.483 21:42:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.483 [2024-12-10 21:42:22.589036] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:22.483 21:42:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.483 21:42:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:22.483 21:42:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:22.483 21:42:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.483 21:42:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:22.483 21:42:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:22.483 21:42:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:22.483 21:42:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.483 21:42:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.483 21:42:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.483 21:42:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.483 21:42:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.483 
21:42:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.483 21:42:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.483 21:42:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.483 21:42:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.483 21:42:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.483 "name": "raid_bdev1", 00:15:22.483 "uuid": "9e9c9dbf-4359-4280-9874-64d4c7304365", 00:15:22.483 "strip_size_kb": 0, 00:15:22.483 "state": "online", 00:15:22.483 "raid_level": "raid1", 00:15:22.483 "superblock": false, 00:15:22.483 "num_base_bdevs": 4, 00:15:22.483 "num_base_bdevs_discovered": 3, 00:15:22.483 "num_base_bdevs_operational": 3, 00:15:22.483 "base_bdevs_list": [ 00:15:22.483 { 00:15:22.483 "name": null, 00:15:22.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.483 "is_configured": false, 00:15:22.483 "data_offset": 0, 00:15:22.483 "data_size": 65536 00:15:22.483 }, 00:15:22.483 { 00:15:22.483 "name": "BaseBdev2", 00:15:22.483 "uuid": "6b320be4-ad70-5a82-85be-0377df13c273", 00:15:22.483 "is_configured": true, 00:15:22.483 "data_offset": 0, 00:15:22.483 "data_size": 65536 00:15:22.483 }, 00:15:22.483 { 00:15:22.483 "name": "BaseBdev3", 00:15:22.483 "uuid": "e1839384-cf7a-5879-bfcb-35497f57b01b", 00:15:22.483 "is_configured": true, 00:15:22.483 "data_offset": 0, 00:15:22.483 "data_size": 65536 00:15:22.483 }, 00:15:22.483 { 00:15:22.483 "name": "BaseBdev4", 00:15:22.483 "uuid": "5c2552e7-c6c4-507a-9beb-03c341b75af7", 00:15:22.483 "is_configured": true, 00:15:22.483 "data_offset": 0, 00:15:22.483 "data_size": 65536 00:15:22.483 } 00:15:22.483 ] 00:15:22.483 }' 00:15:22.483 21:42:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.483 21:42:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:15:22.483 21:42:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:22.483 21:42:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.483 21:42:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.483 [2024-12-10 21:42:22.988395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:22.483 [2024-12-10 21:42:23.005899] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:15:22.483 21:42:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.483 21:42:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:22.483 [2024-12-10 21:42:23.008055] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:23.423 21:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:23.423 21:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.423 21:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:23.423 21:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:23.423 21:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.423 21:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.423 21:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.423 21:42:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.423 21:42:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.423 21:42:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:23.423 21:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.423 "name": "raid_bdev1", 00:15:23.423 "uuid": "9e9c9dbf-4359-4280-9874-64d4c7304365", 00:15:23.423 "strip_size_kb": 0, 00:15:23.423 "state": "online", 00:15:23.423 "raid_level": "raid1", 00:15:23.423 "superblock": false, 00:15:23.423 "num_base_bdevs": 4, 00:15:23.423 "num_base_bdevs_discovered": 4, 00:15:23.423 "num_base_bdevs_operational": 4, 00:15:23.423 "process": { 00:15:23.423 "type": "rebuild", 00:15:23.423 "target": "spare", 00:15:23.423 "progress": { 00:15:23.423 "blocks": 20480, 00:15:23.423 "percent": 31 00:15:23.423 } 00:15:23.423 }, 00:15:23.423 "base_bdevs_list": [ 00:15:23.423 { 00:15:23.423 "name": "spare", 00:15:23.423 "uuid": "e29dcbe7-b855-5e6e-84c0-108ceb009b44", 00:15:23.423 "is_configured": true, 00:15:23.423 "data_offset": 0, 00:15:23.423 "data_size": 65536 00:15:23.423 }, 00:15:23.423 { 00:15:23.423 "name": "BaseBdev2", 00:15:23.423 "uuid": "6b320be4-ad70-5a82-85be-0377df13c273", 00:15:23.423 "is_configured": true, 00:15:23.423 "data_offset": 0, 00:15:23.423 "data_size": 65536 00:15:23.423 }, 00:15:23.423 { 00:15:23.423 "name": "BaseBdev3", 00:15:23.423 "uuid": "e1839384-cf7a-5879-bfcb-35497f57b01b", 00:15:23.423 "is_configured": true, 00:15:23.423 "data_offset": 0, 00:15:23.423 "data_size": 65536 00:15:23.423 }, 00:15:23.423 { 00:15:23.423 "name": "BaseBdev4", 00:15:23.423 "uuid": "5c2552e7-c6c4-507a-9beb-03c341b75af7", 00:15:23.423 "is_configured": true, 00:15:23.423 "data_offset": 0, 00:15:23.424 "data_size": 65536 00:15:23.424 } 00:15:23.424 ] 00:15:23.424 }' 00:15:23.424 21:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.424 21:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:23.424 21:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.424 21:42:24 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:23.424 21:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:23.424 21:42:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.424 21:42:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.424 [2024-12-10 21:42:24.163064] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:23.684 [2024-12-10 21:42:24.214027] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:23.684 [2024-12-10 21:42:24.214130] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.684 [2024-12-10 21:42:24.214150] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:23.684 [2024-12-10 21:42:24.214159] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:23.684 21:42:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.684 21:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:23.684 21:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:23.684 21:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.684 21:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:23.684 21:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:23.684 21:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:23.684 21:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.684 21:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:23.684 21:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.684 21:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.684 21:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.684 21:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.684 21:42:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.684 21:42:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.684 21:42:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.684 21:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.684 "name": "raid_bdev1", 00:15:23.684 "uuid": "9e9c9dbf-4359-4280-9874-64d4c7304365", 00:15:23.684 "strip_size_kb": 0, 00:15:23.684 "state": "online", 00:15:23.684 "raid_level": "raid1", 00:15:23.684 "superblock": false, 00:15:23.684 "num_base_bdevs": 4, 00:15:23.684 "num_base_bdevs_discovered": 3, 00:15:23.684 "num_base_bdevs_operational": 3, 00:15:23.684 "base_bdevs_list": [ 00:15:23.684 { 00:15:23.684 "name": null, 00:15:23.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.684 "is_configured": false, 00:15:23.684 "data_offset": 0, 00:15:23.684 "data_size": 65536 00:15:23.684 }, 00:15:23.684 { 00:15:23.684 "name": "BaseBdev2", 00:15:23.684 "uuid": "6b320be4-ad70-5a82-85be-0377df13c273", 00:15:23.684 "is_configured": true, 00:15:23.684 "data_offset": 0, 00:15:23.684 "data_size": 65536 00:15:23.684 }, 00:15:23.684 { 00:15:23.684 "name": "BaseBdev3", 00:15:23.684 "uuid": "e1839384-cf7a-5879-bfcb-35497f57b01b", 00:15:23.684 "is_configured": true, 00:15:23.684 "data_offset": 0, 00:15:23.684 "data_size": 65536 00:15:23.684 }, 00:15:23.684 { 00:15:23.684 "name": "BaseBdev4", 00:15:23.684 "uuid": 
"5c2552e7-c6c4-507a-9beb-03c341b75af7", 00:15:23.684 "is_configured": true, 00:15:23.684 "data_offset": 0, 00:15:23.684 "data_size": 65536 00:15:23.684 } 00:15:23.684 ] 00:15:23.684 }' 00:15:23.684 21:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.684 21:42:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.944 21:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:23.944 21:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.944 21:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:23.944 21:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:23.944 21:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.944 21:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.944 21:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.944 21:42:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.944 21:42:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.944 21:42:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.944 21:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.944 "name": "raid_bdev1", 00:15:23.944 "uuid": "9e9c9dbf-4359-4280-9874-64d4c7304365", 00:15:23.944 "strip_size_kb": 0, 00:15:23.944 "state": "online", 00:15:23.944 "raid_level": "raid1", 00:15:23.944 "superblock": false, 00:15:23.944 "num_base_bdevs": 4, 00:15:23.944 "num_base_bdevs_discovered": 3, 00:15:23.944 "num_base_bdevs_operational": 3, 00:15:23.944 "base_bdevs_list": [ 00:15:23.944 { 00:15:23.944 "name": null, 00:15:23.944 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:23.944 "is_configured": false, 00:15:23.944 "data_offset": 0, 00:15:23.944 "data_size": 65536 00:15:23.944 }, 00:15:23.944 { 00:15:23.944 "name": "BaseBdev2", 00:15:23.944 "uuid": "6b320be4-ad70-5a82-85be-0377df13c273", 00:15:23.944 "is_configured": true, 00:15:23.944 "data_offset": 0, 00:15:23.944 "data_size": 65536 00:15:23.944 }, 00:15:23.944 { 00:15:23.944 "name": "BaseBdev3", 00:15:23.944 "uuid": "e1839384-cf7a-5879-bfcb-35497f57b01b", 00:15:23.944 "is_configured": true, 00:15:23.944 "data_offset": 0, 00:15:23.944 "data_size": 65536 00:15:23.944 }, 00:15:23.944 { 00:15:23.944 "name": "BaseBdev4", 00:15:23.944 "uuid": "5c2552e7-c6c4-507a-9beb-03c341b75af7", 00:15:23.944 "is_configured": true, 00:15:23.944 "data_offset": 0, 00:15:23.944 "data_size": 65536 00:15:23.944 } 00:15:23.944 ] 00:15:23.944 }' 00:15:23.944 21:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.204 21:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:24.204 21:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.204 21:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:24.204 21:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:24.204 21:42:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.204 21:42:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.204 [2024-12-10 21:42:24.819880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:24.204 [2024-12-10 21:42:24.835404] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:15:24.204 21:42:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.204 21:42:24 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:24.204 [2024-12-10 21:42:24.837513] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:25.142 21:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.142 21:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.142 21:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.142 21:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.142 21:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.142 21:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.142 21:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.142 21:42:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.142 21:42:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.142 21:42:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.142 21:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.142 "name": "raid_bdev1", 00:15:25.142 "uuid": "9e9c9dbf-4359-4280-9874-64d4c7304365", 00:15:25.142 "strip_size_kb": 0, 00:15:25.142 "state": "online", 00:15:25.142 "raid_level": "raid1", 00:15:25.142 "superblock": false, 00:15:25.142 "num_base_bdevs": 4, 00:15:25.142 "num_base_bdevs_discovered": 4, 00:15:25.142 "num_base_bdevs_operational": 4, 00:15:25.142 "process": { 00:15:25.142 "type": "rebuild", 00:15:25.142 "target": "spare", 00:15:25.142 "progress": { 00:15:25.142 "blocks": 20480, 00:15:25.142 "percent": 31 00:15:25.142 } 00:15:25.142 }, 00:15:25.142 "base_bdevs_list": [ 00:15:25.142 { 
00:15:25.142 "name": "spare", 00:15:25.142 "uuid": "e29dcbe7-b855-5e6e-84c0-108ceb009b44", 00:15:25.142 "is_configured": true, 00:15:25.142 "data_offset": 0, 00:15:25.142 "data_size": 65536 00:15:25.142 }, 00:15:25.142 { 00:15:25.142 "name": "BaseBdev2", 00:15:25.142 "uuid": "6b320be4-ad70-5a82-85be-0377df13c273", 00:15:25.142 "is_configured": true, 00:15:25.142 "data_offset": 0, 00:15:25.142 "data_size": 65536 00:15:25.142 }, 00:15:25.142 { 00:15:25.142 "name": "BaseBdev3", 00:15:25.142 "uuid": "e1839384-cf7a-5879-bfcb-35497f57b01b", 00:15:25.142 "is_configured": true, 00:15:25.142 "data_offset": 0, 00:15:25.142 "data_size": 65536 00:15:25.142 }, 00:15:25.142 { 00:15:25.142 "name": "BaseBdev4", 00:15:25.142 "uuid": "5c2552e7-c6c4-507a-9beb-03c341b75af7", 00:15:25.142 "is_configured": true, 00:15:25.142 "data_offset": 0, 00:15:25.142 "data_size": 65536 00:15:25.142 } 00:15:25.142 ] 00:15:25.142 }' 00:15:25.142 21:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.402 21:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:25.402 21:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.402 21:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:25.402 21:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:25.402 21:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:25.402 21:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:25.402 21:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:25.402 21:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:25.402 21:42:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:25.402 21:42:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.402 [2024-12-10 21:42:25.980969] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:25.402 [2024-12-10 21:42:26.043315] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:15:25.402 21:42:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.402 21:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:25.402 21:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:25.402 21:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.402 21:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.402 21:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.402 21:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.402 21:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.402 21:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.402 21:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.402 21:42:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.402 21:42:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.402 21:42:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.402 21:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.402 "name": "raid_bdev1", 00:15:25.402 "uuid": "9e9c9dbf-4359-4280-9874-64d4c7304365", 00:15:25.402 "strip_size_kb": 0, 00:15:25.402 "state": "online", 
00:15:25.402 "raid_level": "raid1", 00:15:25.402 "superblock": false, 00:15:25.402 "num_base_bdevs": 4, 00:15:25.402 "num_base_bdevs_discovered": 3, 00:15:25.402 "num_base_bdevs_operational": 3, 00:15:25.402 "process": { 00:15:25.402 "type": "rebuild", 00:15:25.402 "target": "spare", 00:15:25.402 "progress": { 00:15:25.402 "blocks": 24576, 00:15:25.402 "percent": 37 00:15:25.402 } 00:15:25.402 }, 00:15:25.402 "base_bdevs_list": [ 00:15:25.402 { 00:15:25.402 "name": "spare", 00:15:25.402 "uuid": "e29dcbe7-b855-5e6e-84c0-108ceb009b44", 00:15:25.402 "is_configured": true, 00:15:25.402 "data_offset": 0, 00:15:25.402 "data_size": 65536 00:15:25.402 }, 00:15:25.402 { 00:15:25.402 "name": null, 00:15:25.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.402 "is_configured": false, 00:15:25.402 "data_offset": 0, 00:15:25.402 "data_size": 65536 00:15:25.402 }, 00:15:25.402 { 00:15:25.402 "name": "BaseBdev3", 00:15:25.402 "uuid": "e1839384-cf7a-5879-bfcb-35497f57b01b", 00:15:25.402 "is_configured": true, 00:15:25.402 "data_offset": 0, 00:15:25.402 "data_size": 65536 00:15:25.402 }, 00:15:25.402 { 00:15:25.402 "name": "BaseBdev4", 00:15:25.402 "uuid": "5c2552e7-c6c4-507a-9beb-03c341b75af7", 00:15:25.402 "is_configured": true, 00:15:25.402 "data_offset": 0, 00:15:25.402 "data_size": 65536 00:15:25.402 } 00:15:25.402 ] 00:15:25.402 }' 00:15:25.402 21:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.402 21:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:25.402 21:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.662 21:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:25.662 21:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=458 00:15:25.662 21:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 
00:15:25.662 21:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.662 21:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.662 21:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.662 21:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.662 21:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.662 21:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.662 21:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.662 21:42:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.662 21:42:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.662 21:42:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.662 21:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.662 "name": "raid_bdev1", 00:15:25.662 "uuid": "9e9c9dbf-4359-4280-9874-64d4c7304365", 00:15:25.662 "strip_size_kb": 0, 00:15:25.662 "state": "online", 00:15:25.662 "raid_level": "raid1", 00:15:25.662 "superblock": false, 00:15:25.662 "num_base_bdevs": 4, 00:15:25.662 "num_base_bdevs_discovered": 3, 00:15:25.662 "num_base_bdevs_operational": 3, 00:15:25.662 "process": { 00:15:25.662 "type": "rebuild", 00:15:25.662 "target": "spare", 00:15:25.662 "progress": { 00:15:25.662 "blocks": 26624, 00:15:25.662 "percent": 40 00:15:25.662 } 00:15:25.662 }, 00:15:25.662 "base_bdevs_list": [ 00:15:25.662 { 00:15:25.662 "name": "spare", 00:15:25.662 "uuid": "e29dcbe7-b855-5e6e-84c0-108ceb009b44", 00:15:25.662 "is_configured": true, 00:15:25.662 "data_offset": 0, 00:15:25.662 "data_size": 65536 00:15:25.662 }, 
00:15:25.662 { 00:15:25.662 "name": null, 00:15:25.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.662 "is_configured": false, 00:15:25.662 "data_offset": 0, 00:15:25.662 "data_size": 65536 00:15:25.662 }, 00:15:25.662 { 00:15:25.662 "name": "BaseBdev3", 00:15:25.662 "uuid": "e1839384-cf7a-5879-bfcb-35497f57b01b", 00:15:25.662 "is_configured": true, 00:15:25.662 "data_offset": 0, 00:15:25.662 "data_size": 65536 00:15:25.662 }, 00:15:25.662 { 00:15:25.662 "name": "BaseBdev4", 00:15:25.662 "uuid": "5c2552e7-c6c4-507a-9beb-03c341b75af7", 00:15:25.662 "is_configured": true, 00:15:25.662 "data_offset": 0, 00:15:25.662 "data_size": 65536 00:15:25.662 } 00:15:25.663 ] 00:15:25.663 }' 00:15:25.663 21:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.663 21:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:25.663 21:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.663 21:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:25.663 21:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:26.601 21:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:26.601 21:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:26.601 21:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.601 21:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:26.601 21:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:26.601 21:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.601 21:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:26.601 21:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.601 21:42:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.601 21:42:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.601 21:42:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.860 21:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.860 "name": "raid_bdev1", 00:15:26.860 "uuid": "9e9c9dbf-4359-4280-9874-64d4c7304365", 00:15:26.860 "strip_size_kb": 0, 00:15:26.860 "state": "online", 00:15:26.860 "raid_level": "raid1", 00:15:26.860 "superblock": false, 00:15:26.860 "num_base_bdevs": 4, 00:15:26.860 "num_base_bdevs_discovered": 3, 00:15:26.860 "num_base_bdevs_operational": 3, 00:15:26.860 "process": { 00:15:26.860 "type": "rebuild", 00:15:26.860 "target": "spare", 00:15:26.860 "progress": { 00:15:26.860 "blocks": 49152, 00:15:26.860 "percent": 75 00:15:26.860 } 00:15:26.860 }, 00:15:26.860 "base_bdevs_list": [ 00:15:26.860 { 00:15:26.860 "name": "spare", 00:15:26.860 "uuid": "e29dcbe7-b855-5e6e-84c0-108ceb009b44", 00:15:26.860 "is_configured": true, 00:15:26.860 "data_offset": 0, 00:15:26.860 "data_size": 65536 00:15:26.860 }, 00:15:26.860 { 00:15:26.860 "name": null, 00:15:26.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.860 "is_configured": false, 00:15:26.860 "data_offset": 0, 00:15:26.860 "data_size": 65536 00:15:26.860 }, 00:15:26.860 { 00:15:26.860 "name": "BaseBdev3", 00:15:26.860 "uuid": "e1839384-cf7a-5879-bfcb-35497f57b01b", 00:15:26.860 "is_configured": true, 00:15:26.860 "data_offset": 0, 00:15:26.860 "data_size": 65536 00:15:26.860 }, 00:15:26.860 { 00:15:26.860 "name": "BaseBdev4", 00:15:26.860 "uuid": "5c2552e7-c6c4-507a-9beb-03c341b75af7", 00:15:26.860 "is_configured": true, 00:15:26.860 "data_offset": 0, 00:15:26.860 "data_size": 65536 
00:15:26.860 } 00:15:26.860 ] 00:15:26.860 }' 00:15:26.860 21:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.860 21:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:26.860 21:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.860 21:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:26.860 21:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:27.427 [2024-12-10 21:42:28.052918] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:27.427 [2024-12-10 21:42:28.053018] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:27.427 [2024-12-10 21:42:28.053081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.996 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:27.996 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:27.996 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.996 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:27.996 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:27.996 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.996 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.996 21:42:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.996 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.996 21:42:28 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.996 21:42:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.996 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.996 "name": "raid_bdev1", 00:15:27.996 "uuid": "9e9c9dbf-4359-4280-9874-64d4c7304365", 00:15:27.996 "strip_size_kb": 0, 00:15:27.996 "state": "online", 00:15:27.996 "raid_level": "raid1", 00:15:27.996 "superblock": false, 00:15:27.996 "num_base_bdevs": 4, 00:15:27.996 "num_base_bdevs_discovered": 3, 00:15:27.996 "num_base_bdevs_operational": 3, 00:15:27.996 "base_bdevs_list": [ 00:15:27.996 { 00:15:27.996 "name": "spare", 00:15:27.996 "uuid": "e29dcbe7-b855-5e6e-84c0-108ceb009b44", 00:15:27.996 "is_configured": true, 00:15:27.996 "data_offset": 0, 00:15:27.996 "data_size": 65536 00:15:27.996 }, 00:15:27.996 { 00:15:27.996 "name": null, 00:15:27.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.996 "is_configured": false, 00:15:27.996 "data_offset": 0, 00:15:27.996 "data_size": 65536 00:15:27.996 }, 00:15:27.996 { 00:15:27.996 "name": "BaseBdev3", 00:15:27.996 "uuid": "e1839384-cf7a-5879-bfcb-35497f57b01b", 00:15:27.996 "is_configured": true, 00:15:27.996 "data_offset": 0, 00:15:27.996 "data_size": 65536 00:15:27.996 }, 00:15:27.996 { 00:15:27.996 "name": "BaseBdev4", 00:15:27.996 "uuid": "5c2552e7-c6c4-507a-9beb-03c341b75af7", 00:15:27.996 "is_configured": true, 00:15:27.996 "data_offset": 0, 00:15:27.996 "data_size": 65536 00:15:27.996 } 00:15:27.996 ] 00:15:27.996 }' 00:15:27.996 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.996 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:27.996 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.996 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
none == \s\p\a\r\e ]] 00:15:27.996 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:27.996 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:27.996 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.996 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:27.996 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:27.996 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.996 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.996 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.996 21:42:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.996 21:42:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.996 21:42:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.996 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.996 "name": "raid_bdev1", 00:15:27.996 "uuid": "9e9c9dbf-4359-4280-9874-64d4c7304365", 00:15:27.996 "strip_size_kb": 0, 00:15:27.996 "state": "online", 00:15:27.996 "raid_level": "raid1", 00:15:27.996 "superblock": false, 00:15:27.996 "num_base_bdevs": 4, 00:15:27.996 "num_base_bdevs_discovered": 3, 00:15:27.996 "num_base_bdevs_operational": 3, 00:15:27.996 "base_bdevs_list": [ 00:15:27.996 { 00:15:27.996 "name": "spare", 00:15:27.996 "uuid": "e29dcbe7-b855-5e6e-84c0-108ceb009b44", 00:15:27.996 "is_configured": true, 00:15:27.996 "data_offset": 0, 00:15:27.996 "data_size": 65536 00:15:27.996 }, 00:15:27.996 { 00:15:27.996 "name": null, 00:15:27.996 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:27.996 "is_configured": false, 00:15:27.996 "data_offset": 0, 00:15:27.996 "data_size": 65536 00:15:27.996 }, 00:15:27.996 { 00:15:27.996 "name": "BaseBdev3", 00:15:27.996 "uuid": "e1839384-cf7a-5879-bfcb-35497f57b01b", 00:15:27.996 "is_configured": true, 00:15:27.996 "data_offset": 0, 00:15:27.996 "data_size": 65536 00:15:27.996 }, 00:15:27.996 { 00:15:27.996 "name": "BaseBdev4", 00:15:27.996 "uuid": "5c2552e7-c6c4-507a-9beb-03c341b75af7", 00:15:27.997 "is_configured": true, 00:15:27.997 "data_offset": 0, 00:15:27.997 "data_size": 65536 00:15:27.997 } 00:15:27.997 ] 00:15:27.997 }' 00:15:27.997 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.997 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:27.997 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.265 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:28.265 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:28.265 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.265 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.265 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:28.265 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:28.265 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:28.265 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.265 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.265 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:28.265 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.265 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.265 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.265 21:42:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.265 21:42:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.265 21:42:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.265 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.265 "name": "raid_bdev1", 00:15:28.265 "uuid": "9e9c9dbf-4359-4280-9874-64d4c7304365", 00:15:28.265 "strip_size_kb": 0, 00:15:28.265 "state": "online", 00:15:28.265 "raid_level": "raid1", 00:15:28.265 "superblock": false, 00:15:28.265 "num_base_bdevs": 4, 00:15:28.265 "num_base_bdevs_discovered": 3, 00:15:28.265 "num_base_bdevs_operational": 3, 00:15:28.265 "base_bdevs_list": [ 00:15:28.265 { 00:15:28.265 "name": "spare", 00:15:28.265 "uuid": "e29dcbe7-b855-5e6e-84c0-108ceb009b44", 00:15:28.265 "is_configured": true, 00:15:28.265 "data_offset": 0, 00:15:28.265 "data_size": 65536 00:15:28.265 }, 00:15:28.265 { 00:15:28.266 "name": null, 00:15:28.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.266 "is_configured": false, 00:15:28.266 "data_offset": 0, 00:15:28.266 "data_size": 65536 00:15:28.266 }, 00:15:28.266 { 00:15:28.266 "name": "BaseBdev3", 00:15:28.266 "uuid": "e1839384-cf7a-5879-bfcb-35497f57b01b", 00:15:28.266 "is_configured": true, 00:15:28.266 "data_offset": 0, 00:15:28.266 "data_size": 65536 00:15:28.266 }, 00:15:28.266 { 00:15:28.266 "name": "BaseBdev4", 00:15:28.266 "uuid": "5c2552e7-c6c4-507a-9beb-03c341b75af7", 00:15:28.266 "is_configured": true, 00:15:28.266 "data_offset": 0, 00:15:28.266 
"data_size": 65536 00:15:28.266 } 00:15:28.266 ] 00:15:28.266 }' 00:15:28.266 21:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.266 21:42:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.535 21:42:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:28.535 21:42:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.535 21:42:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.535 [2024-12-10 21:42:29.203320] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:28.535 [2024-12-10 21:42:29.203357] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:28.535 [2024-12-10 21:42:29.203448] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:28.535 [2024-12-10 21:42:29.203534] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:28.535 [2024-12-10 21:42:29.203544] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:28.535 21:42:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.535 21:42:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.535 21:42:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:28.535 21:42:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.535 21:42:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.535 21:42:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.535 21:42:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:28.535 21:42:29 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:28.535 21:42:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:28.535 21:42:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:28.535 21:42:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:28.535 21:42:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:28.535 21:42:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:28.535 21:42:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:28.535 21:42:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:28.535 21:42:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:28.535 21:42:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:28.535 21:42:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:28.535 21:42:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:28.795 /dev/nbd0 00:15:28.795 21:42:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:28.795 21:42:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:28.795 21:42:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:28.795 21:42:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:28.795 21:42:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:28.795 21:42:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:28.795 21:42:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- 
# grep -q -w nbd0 /proc/partitions 00:15:28.795 21:42:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:28.795 21:42:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:28.795 21:42:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:28.795 21:42:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:28.795 1+0 records in 00:15:28.795 1+0 records out 00:15:28.795 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349338 s, 11.7 MB/s 00:15:28.795 21:42:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.795 21:42:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:28.795 21:42:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.795 21:42:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:28.795 21:42:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:28.795 21:42:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:28.795 21:42:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:28.795 21:42:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:29.055 /dev/nbd1 00:15:29.055 21:42:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:29.055 21:42:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:29.055 21:42:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:29.055 21:42:29 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@873 -- # local i 00:15:29.055 21:42:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:29.055 21:42:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:29.055 21:42:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:29.055 21:42:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:29.055 21:42:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:29.055 21:42:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:29.055 21:42:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:29.055 1+0 records in 00:15:29.055 1+0 records out 00:15:29.055 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284302 s, 14.4 MB/s 00:15:29.055 21:42:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.055 21:42:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:29.055 21:42:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.055 21:42:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:29.055 21:42:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:29.055 21:42:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:29.055 21:42:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:29.055 21:42:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:29.314 21:42:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 
00:15:29.314 21:42:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:29.314 21:42:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:29.314 21:42:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:29.314 21:42:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:29.314 21:42:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:29.314 21:42:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:29.573 21:42:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:29.573 21:42:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:29.573 21:42:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:29.573 21:42:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:29.573 21:42:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:29.573 21:42:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:29.573 21:42:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:29.573 21:42:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:29.573 21:42:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:29.573 21:42:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:29.833 21:42:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:29.833 21:42:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:29.833 21:42:30 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:29.833 21:42:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:29.833 21:42:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:29.833 21:42:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:29.833 21:42:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:29.833 21:42:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:29.833 21:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:29.833 21:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77714 00:15:29.833 21:42:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77714 ']' 00:15:29.833 21:42:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77714 00:15:29.833 21:42:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:15:29.833 21:42:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:29.833 21:42:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77714 00:15:29.833 21:42:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:29.833 21:42:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:29.833 killing process with pid 77714 00:15:29.833 Received shutdown signal, test time was about 60.000000 seconds 00:15:29.833 00:15:29.833 Latency(us) 00:15:29.833 [2024-12-10T21:42:30.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.833 [2024-12-10T21:42:30.616Z] =================================================================================================================== 00:15:29.833 [2024-12-10T21:42:30.616Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 
0.00 00:15:29.833 21:42:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77714' 00:15:29.833 21:42:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77714 00:15:29.833 [2024-12-10 21:42:30.477672] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:29.833 21:42:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77714 00:15:30.404 [2024-12-10 21:42:30.977296] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:31.786 00:15:31.786 real 0m17.703s 00:15:31.786 user 0m19.653s 00:15:31.786 sys 0m3.042s 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.786 ************************************ 00:15:31.786 END TEST raid_rebuild_test 00:15:31.786 ************************************ 00:15:31.786 21:42:32 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:15:31.786 21:42:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:31.786 21:42:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:31.786 21:42:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:31.786 ************************************ 00:15:31.786 START TEST raid_rebuild_test_sb 00:15:31.786 ************************************ 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local 
raid_bdev_name=raid_bdev1 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78160 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78160 00:15:31.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78160 ']' 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:31.786 21:42:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.786 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:31.786 Zero copy mechanism will not be used. 00:15:31.786 [2024-12-10 21:42:32.308770] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:15:31.786 [2024-12-10 21:42:32.308892] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78160 ] 00:15:31.786 [2024-12-10 21:42:32.482020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.046 [2024-12-10 21:42:32.606087] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.046 [2024-12-10 21:42:32.813065] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:32.046 [2024-12-10 21:42:32.813145] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:32.630 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:32.630 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:32.630 21:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:32.630 21:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:32.630 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.630 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.630 BaseBdev1_malloc 00:15:32.630 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:32.630 21:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:32.630 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.631 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.631 [2024-12-10 21:42:33.217374] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:32.631 [2024-12-10 21:42:33.217575] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.631 [2024-12-10 21:42:33.217612] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:32.631 [2024-12-10 21:42:33.217626] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.631 [2024-12-10 21:42:33.220244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.631 [2024-12-10 21:42:33.220293] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:32.631 BaseBdev1 00:15:32.631 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.631 21:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:32.631 21:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:32.631 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.631 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.631 BaseBdev2_malloc 00:15:32.631 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.631 21:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:32.631 21:42:33 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.631 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.631 [2024-12-10 21:42:33.275314] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:32.631 [2024-12-10 21:42:33.275385] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.631 [2024-12-10 21:42:33.275405] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:32.631 [2024-12-10 21:42:33.275416] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.631 [2024-12-10 21:42:33.277624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.631 [2024-12-10 21:42:33.277716] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:32.631 BaseBdev2 00:15:32.631 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.631 21:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:32.631 21:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:32.631 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.631 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.631 BaseBdev3_malloc 00:15:32.631 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.631 21:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:32.631 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.631 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.631 [2024-12-10 21:42:33.343264] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:32.631 [2024-12-10 21:42:33.343327] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.631 [2024-12-10 21:42:33.343368] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:32.631 [2024-12-10 21:42:33.343380] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.631 [2024-12-10 21:42:33.345746] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.631 [2024-12-10 21:42:33.345790] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:32.631 BaseBdev3 00:15:32.631 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.631 21:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:32.631 21:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:32.631 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.631 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.631 BaseBdev4_malloc 00:15:32.631 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.631 21:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:32.631 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.631 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.631 [2024-12-10 21:42:33.397785] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:32.631 [2024-12-10 21:42:33.397855] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:15:32.631 [2024-12-10 21:42:33.397877] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:32.631 [2024-12-10 21:42:33.397888] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.631 [2024-12-10 21:42:33.400210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.631 [2024-12-10 21:42:33.400256] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:32.631 BaseBdev4 00:15:32.631 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.631 21:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:32.631 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.631 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.890 spare_malloc 00:15:32.890 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.890 21:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:32.890 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.890 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.890 spare_delay 00:15:32.890 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.890 21:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:32.890 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.891 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.891 [2024-12-10 21:42:33.465694] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:32.891 [2024-12-10 21:42:33.465758] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.891 [2024-12-10 21:42:33.465780] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:32.891 [2024-12-10 21:42:33.465792] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.891 [2024-12-10 21:42:33.468206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.891 [2024-12-10 21:42:33.468315] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:32.891 spare 00:15:32.891 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.891 21:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:32.891 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.891 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.891 [2024-12-10 21:42:33.477717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:32.891 [2024-12-10 21:42:33.479649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:32.891 [2024-12-10 21:42:33.479722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:32.891 [2024-12-10 21:42:33.479781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:32.891 [2024-12-10 21:42:33.480002] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:32.891 [2024-12-10 21:42:33.480027] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:32.891 [2024-12-10 21:42:33.480273] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:32.891 [2024-12-10 21:42:33.480468] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:32.891 [2024-12-10 21:42:33.480480] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:32.891 [2024-12-10 21:42:33.480645] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.891 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.891 21:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:32.891 21:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:32.891 21:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.891 21:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:32.891 21:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:32.891 21:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:32.891 21:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.891 21:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.891 21:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.891 21:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.891 21:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.891 21:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.891 21:42:33 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.891 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.891 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.891 21:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.891 "name": "raid_bdev1", 00:15:32.891 "uuid": "070dda51-3fde-4cb0-b5a8-13732a7941b0", 00:15:32.891 "strip_size_kb": 0, 00:15:32.891 "state": "online", 00:15:32.891 "raid_level": "raid1", 00:15:32.891 "superblock": true, 00:15:32.891 "num_base_bdevs": 4, 00:15:32.891 "num_base_bdevs_discovered": 4, 00:15:32.891 "num_base_bdevs_operational": 4, 00:15:32.891 "base_bdevs_list": [ 00:15:32.891 { 00:15:32.891 "name": "BaseBdev1", 00:15:32.891 "uuid": "fe9715c3-5e49-5f7b-a3f7-7bbd7f4b69a2", 00:15:32.891 "is_configured": true, 00:15:32.891 "data_offset": 2048, 00:15:32.891 "data_size": 63488 00:15:32.891 }, 00:15:32.891 { 00:15:32.891 "name": "BaseBdev2", 00:15:32.891 "uuid": "14c84bd7-2d08-5501-9c19-7afe40b8b1d2", 00:15:32.891 "is_configured": true, 00:15:32.891 "data_offset": 2048, 00:15:32.891 "data_size": 63488 00:15:32.891 }, 00:15:32.891 { 00:15:32.891 "name": "BaseBdev3", 00:15:32.891 "uuid": "591fbcad-4988-5add-9097-0c597a926569", 00:15:32.891 "is_configured": true, 00:15:32.891 "data_offset": 2048, 00:15:32.891 "data_size": 63488 00:15:32.891 }, 00:15:32.891 { 00:15:32.891 "name": "BaseBdev4", 00:15:32.891 "uuid": "bc7644de-83ed-58f5-baa0-a3de543322b1", 00:15:32.891 "is_configured": true, 00:15:32.891 "data_offset": 2048, 00:15:32.891 "data_size": 63488 00:15:32.891 } 00:15:32.891 ] 00:15:32.891 }' 00:15:32.891 21:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.891 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.460 21:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:33.460 
21:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:33.460 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.460 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.460 [2024-12-10 21:42:33.965288] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:33.460 21:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.460 21:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:33.460 21:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.460 21:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:33.460 21:42:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.460 21:42:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.460 21:42:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.460 21:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:33.460 21:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:33.460 21:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:33.460 21:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:33.460 21:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:33.460 21:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:33.460 21:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:33.460 21:42:34 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:33.460 21:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:33.460 21:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:33.460 21:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:33.460 21:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:33.460 21:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:33.460 21:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:33.719 [2024-12-10 21:42:34.256491] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:33.719 /dev/nbd0 00:15:33.719 21:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:33.719 21:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:33.719 21:42:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:33.719 21:42:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:33.719 21:42:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:33.719 21:42:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:33.719 21:42:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:33.719 21:42:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:33.719 21:42:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:33.719 21:42:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:33.719 21:42:34 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:33.719 1+0 records in 00:15:33.719 1+0 records out 00:15:33.719 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403331 s, 10.2 MB/s 00:15:33.719 21:42:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.719 21:42:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:33.719 21:42:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.719 21:42:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:33.719 21:42:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:33.719 21:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:33.719 21:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:33.719 21:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:33.719 21:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:33.719 21:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:15:40.284 63488+0 records in 00:15:40.284 63488+0 records out 00:15:40.284 32505856 bytes (33 MB, 31 MiB) copied, 5.73274 s, 5.7 MB/s 00:15:40.284 21:42:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:40.284 21:42:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:40.284 21:42:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:40.284 21:42:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 
00:15:40.284 21:42:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:40.284 21:42:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.284 21:42:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:40.284 [2024-12-10 21:42:40.266249] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.284 21:42:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:40.284 21:42:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:40.284 21:42:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:40.284 21:42:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.284 21:42:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.284 21:42:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:40.284 21:42:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:40.284 21:42:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.284 21:42:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:40.284 21:42:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.284 21:42:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.284 [2024-12-10 21:42:40.298295] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:40.284 21:42:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.284 21:42:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:40.284 21:42:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.284 21:42:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.284 21:42:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:40.284 21:42:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:40.284 21:42:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:40.284 21:42:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.284 21:42:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.284 21:42:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.284 21:42:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.284 21:42:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.284 21:42:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.284 21:42:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.284 21:42:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.284 21:42:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.284 21:42:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.284 "name": "raid_bdev1", 00:15:40.284 "uuid": "070dda51-3fde-4cb0-b5a8-13732a7941b0", 00:15:40.284 "strip_size_kb": 0, 00:15:40.284 "state": "online", 00:15:40.284 "raid_level": "raid1", 00:15:40.284 "superblock": true, 00:15:40.284 "num_base_bdevs": 4, 00:15:40.284 "num_base_bdevs_discovered": 3, 00:15:40.284 "num_base_bdevs_operational": 3, 00:15:40.284 "base_bdevs_list": [ 00:15:40.284 { 
00:15:40.284 "name": null, 00:15:40.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.284 "is_configured": false, 00:15:40.284 "data_offset": 0, 00:15:40.284 "data_size": 63488 00:15:40.284 }, 00:15:40.284 { 00:15:40.284 "name": "BaseBdev2", 00:15:40.284 "uuid": "14c84bd7-2d08-5501-9c19-7afe40b8b1d2", 00:15:40.284 "is_configured": true, 00:15:40.284 "data_offset": 2048, 00:15:40.284 "data_size": 63488 00:15:40.284 }, 00:15:40.285 { 00:15:40.285 "name": "BaseBdev3", 00:15:40.285 "uuid": "591fbcad-4988-5add-9097-0c597a926569", 00:15:40.285 "is_configured": true, 00:15:40.285 "data_offset": 2048, 00:15:40.285 "data_size": 63488 00:15:40.285 }, 00:15:40.285 { 00:15:40.285 "name": "BaseBdev4", 00:15:40.285 "uuid": "bc7644de-83ed-58f5-baa0-a3de543322b1", 00:15:40.285 "is_configured": true, 00:15:40.285 "data_offset": 2048, 00:15:40.285 "data_size": 63488 00:15:40.285 } 00:15:40.285 ] 00:15:40.285 }' 00:15:40.285 21:42:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.285 21:42:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.285 21:42:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:40.285 21:42:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.285 21:42:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.285 [2024-12-10 21:42:40.741554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:40.285 [2024-12-10 21:42:40.757035] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:15:40.285 21:42:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.285 21:42:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:40.285 [2024-12-10 21:42:40.758962] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:15:41.222 21:42:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:41.222 21:42:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.222 21:42:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:41.222 21:42:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:41.222 21:42:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.222 21:42:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.222 21:42:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.222 21:42:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.222 21:42:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.222 21:42:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.222 21:42:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.222 "name": "raid_bdev1", 00:15:41.222 "uuid": "070dda51-3fde-4cb0-b5a8-13732a7941b0", 00:15:41.222 "strip_size_kb": 0, 00:15:41.222 "state": "online", 00:15:41.222 "raid_level": "raid1", 00:15:41.222 "superblock": true, 00:15:41.222 "num_base_bdevs": 4, 00:15:41.222 "num_base_bdevs_discovered": 4, 00:15:41.222 "num_base_bdevs_operational": 4, 00:15:41.222 "process": { 00:15:41.222 "type": "rebuild", 00:15:41.222 "target": "spare", 00:15:41.222 "progress": { 00:15:41.222 "blocks": 20480, 00:15:41.222 "percent": 32 00:15:41.222 } 00:15:41.222 }, 00:15:41.222 "base_bdevs_list": [ 00:15:41.222 { 00:15:41.222 "name": "spare", 00:15:41.222 "uuid": "36a906cb-b747-5bb4-8a3c-ec6770a04186", 00:15:41.222 "is_configured": true, 00:15:41.222 
"data_offset": 2048, 00:15:41.222 "data_size": 63488 00:15:41.222 }, 00:15:41.222 { 00:15:41.222 "name": "BaseBdev2", 00:15:41.222 "uuid": "14c84bd7-2d08-5501-9c19-7afe40b8b1d2", 00:15:41.222 "is_configured": true, 00:15:41.222 "data_offset": 2048, 00:15:41.222 "data_size": 63488 00:15:41.222 }, 00:15:41.222 { 00:15:41.222 "name": "BaseBdev3", 00:15:41.222 "uuid": "591fbcad-4988-5add-9097-0c597a926569", 00:15:41.222 "is_configured": true, 00:15:41.222 "data_offset": 2048, 00:15:41.222 "data_size": 63488 00:15:41.222 }, 00:15:41.222 { 00:15:41.222 "name": "BaseBdev4", 00:15:41.222 "uuid": "bc7644de-83ed-58f5-baa0-a3de543322b1", 00:15:41.222 "is_configured": true, 00:15:41.222 "data_offset": 2048, 00:15:41.222 "data_size": 63488 00:15:41.222 } 00:15:41.222 ] 00:15:41.222 }' 00:15:41.222 21:42:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.222 21:42:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:41.222 21:42:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.222 21:42:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:41.222 21:42:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:41.222 21:42:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.222 21:42:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.222 [2024-12-10 21:42:41.910463] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:41.222 [2024-12-10 21:42:41.964836] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:41.222 [2024-12-10 21:42:41.964998] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.222 [2024-12-10 21:42:41.965022] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:41.222 [2024-12-10 21:42:41.965045] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:41.222 21:42:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.222 21:42:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:41.222 21:42:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.222 21:42:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.222 21:42:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:41.222 21:42:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:41.222 21:42:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:41.222 21:42:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.222 21:42:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.222 21:42:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.222 21:42:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.222 21:42:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.222 21:42:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.222 21:42:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.222 21:42:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.481 21:42:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.481 21:42:42 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.481 "name": "raid_bdev1", 00:15:41.481 "uuid": "070dda51-3fde-4cb0-b5a8-13732a7941b0", 00:15:41.481 "strip_size_kb": 0, 00:15:41.481 "state": "online", 00:15:41.481 "raid_level": "raid1", 00:15:41.481 "superblock": true, 00:15:41.481 "num_base_bdevs": 4, 00:15:41.481 "num_base_bdevs_discovered": 3, 00:15:41.481 "num_base_bdevs_operational": 3, 00:15:41.481 "base_bdevs_list": [ 00:15:41.481 { 00:15:41.481 "name": null, 00:15:41.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.481 "is_configured": false, 00:15:41.481 "data_offset": 0, 00:15:41.481 "data_size": 63488 00:15:41.481 }, 00:15:41.481 { 00:15:41.481 "name": "BaseBdev2", 00:15:41.481 "uuid": "14c84bd7-2d08-5501-9c19-7afe40b8b1d2", 00:15:41.481 "is_configured": true, 00:15:41.481 "data_offset": 2048, 00:15:41.481 "data_size": 63488 00:15:41.481 }, 00:15:41.481 { 00:15:41.481 "name": "BaseBdev3", 00:15:41.481 "uuid": "591fbcad-4988-5add-9097-0c597a926569", 00:15:41.481 "is_configured": true, 00:15:41.481 "data_offset": 2048, 00:15:41.481 "data_size": 63488 00:15:41.481 }, 00:15:41.481 { 00:15:41.481 "name": "BaseBdev4", 00:15:41.481 "uuid": "bc7644de-83ed-58f5-baa0-a3de543322b1", 00:15:41.481 "is_configured": true, 00:15:41.481 "data_offset": 2048, 00:15:41.481 "data_size": 63488 00:15:41.481 } 00:15:41.481 ] 00:15:41.481 }' 00:15:41.481 21:42:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.481 21:42:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.741 21:42:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:41.741 21:42:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.741 21:42:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:41.741 21:42:42 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@171 -- # local target=none 00:15:41.741 21:42:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.741 21:42:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.741 21:42:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.741 21:42:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.741 21:42:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.741 21:42:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.000 21:42:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.000 "name": "raid_bdev1", 00:15:42.000 "uuid": "070dda51-3fde-4cb0-b5a8-13732a7941b0", 00:15:42.000 "strip_size_kb": 0, 00:15:42.000 "state": "online", 00:15:42.000 "raid_level": "raid1", 00:15:42.000 "superblock": true, 00:15:42.000 "num_base_bdevs": 4, 00:15:42.000 "num_base_bdevs_discovered": 3, 00:15:42.000 "num_base_bdevs_operational": 3, 00:15:42.000 "base_bdevs_list": [ 00:15:42.000 { 00:15:42.000 "name": null, 00:15:42.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.000 "is_configured": false, 00:15:42.000 "data_offset": 0, 00:15:42.000 "data_size": 63488 00:15:42.000 }, 00:15:42.000 { 00:15:42.000 "name": "BaseBdev2", 00:15:42.000 "uuid": "14c84bd7-2d08-5501-9c19-7afe40b8b1d2", 00:15:42.000 "is_configured": true, 00:15:42.000 "data_offset": 2048, 00:15:42.000 "data_size": 63488 00:15:42.000 }, 00:15:42.000 { 00:15:42.000 "name": "BaseBdev3", 00:15:42.000 "uuid": "591fbcad-4988-5add-9097-0c597a926569", 00:15:42.000 "is_configured": true, 00:15:42.000 "data_offset": 2048, 00:15:42.000 "data_size": 63488 00:15:42.000 }, 00:15:42.000 { 00:15:42.000 "name": "BaseBdev4", 00:15:42.000 "uuid": "bc7644de-83ed-58f5-baa0-a3de543322b1", 00:15:42.000 
"is_configured": true, 00:15:42.000 "data_offset": 2048, 00:15:42.000 "data_size": 63488 00:15:42.000 } 00:15:42.000 ] 00:15:42.000 }' 00:15:42.000 21:42:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.000 21:42:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:42.000 21:42:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.000 21:42:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:42.000 21:42:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:42.000 21:42:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.000 21:42:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.000 [2024-12-10 21:42:42.647146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:42.000 [2024-12-10 21:42:42.661891] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:15:42.000 21:42:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.000 21:42:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:42.000 [2024-12-10 21:42:42.663797] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:42.939 21:42:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:42.939 21:42:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.939 21:42:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:42.939 21:42:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:42.939 21:42:43 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.940 21:42:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.940 21:42:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.940 21:42:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.940 21:42:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.940 21:42:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.940 21:42:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.940 "name": "raid_bdev1", 00:15:42.940 "uuid": "070dda51-3fde-4cb0-b5a8-13732a7941b0", 00:15:42.940 "strip_size_kb": 0, 00:15:42.940 "state": "online", 00:15:42.940 "raid_level": "raid1", 00:15:42.940 "superblock": true, 00:15:42.940 "num_base_bdevs": 4, 00:15:42.940 "num_base_bdevs_discovered": 4, 00:15:42.940 "num_base_bdevs_operational": 4, 00:15:42.940 "process": { 00:15:42.940 "type": "rebuild", 00:15:42.940 "target": "spare", 00:15:42.940 "progress": { 00:15:42.940 "blocks": 20480, 00:15:42.940 "percent": 32 00:15:42.940 } 00:15:42.940 }, 00:15:42.940 "base_bdevs_list": [ 00:15:42.940 { 00:15:42.940 "name": "spare", 00:15:42.940 "uuid": "36a906cb-b747-5bb4-8a3c-ec6770a04186", 00:15:42.940 "is_configured": true, 00:15:42.940 "data_offset": 2048, 00:15:42.940 "data_size": 63488 00:15:42.940 }, 00:15:42.940 { 00:15:42.940 "name": "BaseBdev2", 00:15:42.940 "uuid": "14c84bd7-2d08-5501-9c19-7afe40b8b1d2", 00:15:42.940 "is_configured": true, 00:15:42.940 "data_offset": 2048, 00:15:42.940 "data_size": 63488 00:15:42.940 }, 00:15:42.940 { 00:15:42.940 "name": "BaseBdev3", 00:15:42.940 "uuid": "591fbcad-4988-5add-9097-0c597a926569", 00:15:42.940 "is_configured": true, 00:15:42.940 "data_offset": 2048, 00:15:42.940 "data_size": 63488 
00:15:42.940 }, 00:15:42.940 { 00:15:42.940 "name": "BaseBdev4", 00:15:42.940 "uuid": "bc7644de-83ed-58f5-baa0-a3de543322b1", 00:15:42.940 "is_configured": true, 00:15:42.940 "data_offset": 2048, 00:15:42.940 "data_size": 63488 00:15:42.940 } 00:15:42.940 ] 00:15:42.940 }' 00:15:42.940 21:42:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.199 21:42:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:43.199 21:42:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.199 21:42:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:43.199 21:42:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:43.199 21:42:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:43.199 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:43.199 21:42:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:43.199 21:42:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:43.199 21:42:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:43.199 21:42:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:43.199 21:42:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.199 21:42:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.199 [2024-12-10 21:42:43.823242] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:43.199 [2024-12-10 21:42:43.969516] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:15:43.199 21:42:43 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.199 21:42:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:43.199 21:42:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:43.199 21:42:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:43.199 21:42:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.199 21:42:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:43.199 21:42:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:43.199 21:42:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.199 21:42:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.459 21:42:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.459 21:42:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.459 21:42:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.459 21:42:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.459 21:42:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.459 "name": "raid_bdev1", 00:15:43.459 "uuid": "070dda51-3fde-4cb0-b5a8-13732a7941b0", 00:15:43.459 "strip_size_kb": 0, 00:15:43.459 "state": "online", 00:15:43.459 "raid_level": "raid1", 00:15:43.459 "superblock": true, 00:15:43.459 "num_base_bdevs": 4, 00:15:43.459 "num_base_bdevs_discovered": 3, 00:15:43.459 "num_base_bdevs_operational": 3, 00:15:43.459 "process": { 00:15:43.459 "type": "rebuild", 00:15:43.459 "target": "spare", 00:15:43.459 "progress": { 00:15:43.459 "blocks": 24576, 00:15:43.459 
"percent": 38 00:15:43.459 } 00:15:43.459 }, 00:15:43.459 "base_bdevs_list": [ 00:15:43.459 { 00:15:43.459 "name": "spare", 00:15:43.459 "uuid": "36a906cb-b747-5bb4-8a3c-ec6770a04186", 00:15:43.459 "is_configured": true, 00:15:43.459 "data_offset": 2048, 00:15:43.459 "data_size": 63488 00:15:43.459 }, 00:15:43.459 { 00:15:43.459 "name": null, 00:15:43.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.459 "is_configured": false, 00:15:43.459 "data_offset": 0, 00:15:43.459 "data_size": 63488 00:15:43.459 }, 00:15:43.459 { 00:15:43.459 "name": "BaseBdev3", 00:15:43.459 "uuid": "591fbcad-4988-5add-9097-0c597a926569", 00:15:43.459 "is_configured": true, 00:15:43.459 "data_offset": 2048, 00:15:43.459 "data_size": 63488 00:15:43.459 }, 00:15:43.459 { 00:15:43.459 "name": "BaseBdev4", 00:15:43.459 "uuid": "bc7644de-83ed-58f5-baa0-a3de543322b1", 00:15:43.459 "is_configured": true, 00:15:43.459 "data_offset": 2048, 00:15:43.459 "data_size": 63488 00:15:43.459 } 00:15:43.459 ] 00:15:43.459 }' 00:15:43.459 21:42:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.459 21:42:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:43.459 21:42:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.460 21:42:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:43.460 21:42:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=476 00:15:43.460 21:42:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:43.460 21:42:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:43.460 21:42:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.460 21:42:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 
-- # local process_type=rebuild 00:15:43.460 21:42:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:43.460 21:42:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.460 21:42:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.460 21:42:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.460 21:42:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.460 21:42:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.460 21:42:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.460 21:42:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.460 "name": "raid_bdev1", 00:15:43.460 "uuid": "070dda51-3fde-4cb0-b5a8-13732a7941b0", 00:15:43.460 "strip_size_kb": 0, 00:15:43.460 "state": "online", 00:15:43.460 "raid_level": "raid1", 00:15:43.460 "superblock": true, 00:15:43.460 "num_base_bdevs": 4, 00:15:43.460 "num_base_bdevs_discovered": 3, 00:15:43.460 "num_base_bdevs_operational": 3, 00:15:43.460 "process": { 00:15:43.460 "type": "rebuild", 00:15:43.460 "target": "spare", 00:15:43.460 "progress": { 00:15:43.460 "blocks": 26624, 00:15:43.460 "percent": 41 00:15:43.460 } 00:15:43.460 }, 00:15:43.460 "base_bdevs_list": [ 00:15:43.460 { 00:15:43.460 "name": "spare", 00:15:43.460 "uuid": "36a906cb-b747-5bb4-8a3c-ec6770a04186", 00:15:43.460 "is_configured": true, 00:15:43.460 "data_offset": 2048, 00:15:43.460 "data_size": 63488 00:15:43.460 }, 00:15:43.460 { 00:15:43.460 "name": null, 00:15:43.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.460 "is_configured": false, 00:15:43.460 "data_offset": 0, 00:15:43.460 "data_size": 63488 00:15:43.460 }, 00:15:43.460 { 00:15:43.460 "name": "BaseBdev3", 00:15:43.460 "uuid": 
"591fbcad-4988-5add-9097-0c597a926569", 00:15:43.460 "is_configured": true, 00:15:43.460 "data_offset": 2048, 00:15:43.460 "data_size": 63488 00:15:43.460 }, 00:15:43.460 { 00:15:43.460 "name": "BaseBdev4", 00:15:43.460 "uuid": "bc7644de-83ed-58f5-baa0-a3de543322b1", 00:15:43.460 "is_configured": true, 00:15:43.460 "data_offset": 2048, 00:15:43.460 "data_size": 63488 00:15:43.460 } 00:15:43.460 ] 00:15:43.460 }' 00:15:43.460 21:42:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.460 21:42:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:43.460 21:42:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.719 21:42:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:43.719 21:42:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:44.656 21:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:44.656 21:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:44.656 21:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.656 21:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:44.656 21:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:44.656 21:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.656 21:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.656 21:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.656 21:42:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.656 
21:42:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.656 21:42:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.656 21:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.656 "name": "raid_bdev1", 00:15:44.656 "uuid": "070dda51-3fde-4cb0-b5a8-13732a7941b0", 00:15:44.656 "strip_size_kb": 0, 00:15:44.656 "state": "online", 00:15:44.656 "raid_level": "raid1", 00:15:44.656 "superblock": true, 00:15:44.656 "num_base_bdevs": 4, 00:15:44.656 "num_base_bdevs_discovered": 3, 00:15:44.656 "num_base_bdevs_operational": 3, 00:15:44.656 "process": { 00:15:44.656 "type": "rebuild", 00:15:44.656 "target": "spare", 00:15:44.656 "progress": { 00:15:44.656 "blocks": 51200, 00:15:44.656 "percent": 80 00:15:44.656 } 00:15:44.656 }, 00:15:44.656 "base_bdevs_list": [ 00:15:44.656 { 00:15:44.656 "name": "spare", 00:15:44.656 "uuid": "36a906cb-b747-5bb4-8a3c-ec6770a04186", 00:15:44.656 "is_configured": true, 00:15:44.656 "data_offset": 2048, 00:15:44.656 "data_size": 63488 00:15:44.656 }, 00:15:44.656 { 00:15:44.656 "name": null, 00:15:44.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.656 "is_configured": false, 00:15:44.656 "data_offset": 0, 00:15:44.656 "data_size": 63488 00:15:44.656 }, 00:15:44.656 { 00:15:44.656 "name": "BaseBdev3", 00:15:44.656 "uuid": "591fbcad-4988-5add-9097-0c597a926569", 00:15:44.656 "is_configured": true, 00:15:44.656 "data_offset": 2048, 00:15:44.656 "data_size": 63488 00:15:44.656 }, 00:15:44.656 { 00:15:44.656 "name": "BaseBdev4", 00:15:44.656 "uuid": "bc7644de-83ed-58f5-baa0-a3de543322b1", 00:15:44.656 "is_configured": true, 00:15:44.656 "data_offset": 2048, 00:15:44.656 "data_size": 63488 00:15:44.656 } 00:15:44.656 ] 00:15:44.656 }' 00:15:44.656 21:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.656 21:42:45 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:44.656 21:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.914 21:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:44.914 21:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:45.173 [2024-12-10 21:42:45.878911] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:45.173 [2024-12-10 21:42:45.878997] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:45.173 [2024-12-10 21:42:45.879142] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.742 21:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:45.742 21:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:45.742 21:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.742 21:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:45.742 21:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:45.742 21:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.742 21:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.742 21:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.742 21:42:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.742 21:42:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.742 21:42:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.742 21:42:46 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.742 "name": "raid_bdev1", 00:15:45.742 "uuid": "070dda51-3fde-4cb0-b5a8-13732a7941b0", 00:15:45.742 "strip_size_kb": 0, 00:15:45.742 "state": "online", 00:15:45.742 "raid_level": "raid1", 00:15:45.742 "superblock": true, 00:15:45.742 "num_base_bdevs": 4, 00:15:45.742 "num_base_bdevs_discovered": 3, 00:15:45.742 "num_base_bdevs_operational": 3, 00:15:45.742 "base_bdevs_list": [ 00:15:45.742 { 00:15:45.742 "name": "spare", 00:15:45.742 "uuid": "36a906cb-b747-5bb4-8a3c-ec6770a04186", 00:15:45.742 "is_configured": true, 00:15:45.742 "data_offset": 2048, 00:15:45.742 "data_size": 63488 00:15:45.742 }, 00:15:45.742 { 00:15:45.742 "name": null, 00:15:45.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.742 "is_configured": false, 00:15:45.742 "data_offset": 0, 00:15:45.742 "data_size": 63488 00:15:45.742 }, 00:15:45.742 { 00:15:45.742 "name": "BaseBdev3", 00:15:45.742 "uuid": "591fbcad-4988-5add-9097-0c597a926569", 00:15:45.742 "is_configured": true, 00:15:45.742 "data_offset": 2048, 00:15:45.742 "data_size": 63488 00:15:45.742 }, 00:15:45.742 { 00:15:45.742 "name": "BaseBdev4", 00:15:45.742 "uuid": "bc7644de-83ed-58f5-baa0-a3de543322b1", 00:15:45.742 "is_configured": true, 00:15:45.742 "data_offset": 2048, 00:15:45.742 "data_size": 63488 00:15:45.742 } 00:15:45.742 ] 00:15:45.742 }' 00:15:45.742 21:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.002 21:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:46.002 21:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.002 21:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:46.002 21:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:46.002 21:42:46 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:46.002 21:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.002 21:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:46.002 21:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:46.002 21:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.002 21:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.002 21:42:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.002 21:42:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.002 21:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.002 21:42:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.002 21:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.002 "name": "raid_bdev1", 00:15:46.002 "uuid": "070dda51-3fde-4cb0-b5a8-13732a7941b0", 00:15:46.002 "strip_size_kb": 0, 00:15:46.002 "state": "online", 00:15:46.002 "raid_level": "raid1", 00:15:46.002 "superblock": true, 00:15:46.002 "num_base_bdevs": 4, 00:15:46.002 "num_base_bdevs_discovered": 3, 00:15:46.002 "num_base_bdevs_operational": 3, 00:15:46.002 "base_bdevs_list": [ 00:15:46.002 { 00:15:46.002 "name": "spare", 00:15:46.002 "uuid": "36a906cb-b747-5bb4-8a3c-ec6770a04186", 00:15:46.002 "is_configured": true, 00:15:46.002 "data_offset": 2048, 00:15:46.002 "data_size": 63488 00:15:46.002 }, 00:15:46.002 { 00:15:46.002 "name": null, 00:15:46.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.002 "is_configured": false, 00:15:46.002 "data_offset": 0, 00:15:46.002 "data_size": 63488 00:15:46.002 }, 00:15:46.002 { 
00:15:46.002 "name": "BaseBdev3", 00:15:46.002 "uuid": "591fbcad-4988-5add-9097-0c597a926569", 00:15:46.002 "is_configured": true, 00:15:46.002 "data_offset": 2048, 00:15:46.002 "data_size": 63488 00:15:46.002 }, 00:15:46.002 { 00:15:46.002 "name": "BaseBdev4", 00:15:46.002 "uuid": "bc7644de-83ed-58f5-baa0-a3de543322b1", 00:15:46.002 "is_configured": true, 00:15:46.002 "data_offset": 2048, 00:15:46.002 "data_size": 63488 00:15:46.002 } 00:15:46.002 ] 00:15:46.002 }' 00:15:46.002 21:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.002 21:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:46.002 21:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.002 21:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:46.002 21:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:46.002 21:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.002 21:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.002 21:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:46.002 21:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:46.002 21:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:46.002 21:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.002 21:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.002 21:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.002 21:42:46 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.002 21:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.002 21:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.002 21:42:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.002 21:42:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.002 21:42:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.002 21:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.002 "name": "raid_bdev1", 00:15:46.002 "uuid": "070dda51-3fde-4cb0-b5a8-13732a7941b0", 00:15:46.002 "strip_size_kb": 0, 00:15:46.002 "state": "online", 00:15:46.002 "raid_level": "raid1", 00:15:46.002 "superblock": true, 00:15:46.002 "num_base_bdevs": 4, 00:15:46.002 "num_base_bdevs_discovered": 3, 00:15:46.002 "num_base_bdevs_operational": 3, 00:15:46.002 "base_bdevs_list": [ 00:15:46.002 { 00:15:46.002 "name": "spare", 00:15:46.002 "uuid": "36a906cb-b747-5bb4-8a3c-ec6770a04186", 00:15:46.002 "is_configured": true, 00:15:46.002 "data_offset": 2048, 00:15:46.002 "data_size": 63488 00:15:46.002 }, 00:15:46.002 { 00:15:46.002 "name": null, 00:15:46.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.002 "is_configured": false, 00:15:46.002 "data_offset": 0, 00:15:46.002 "data_size": 63488 00:15:46.002 }, 00:15:46.002 { 00:15:46.002 "name": "BaseBdev3", 00:15:46.002 "uuid": "591fbcad-4988-5add-9097-0c597a926569", 00:15:46.002 "is_configured": true, 00:15:46.002 "data_offset": 2048, 00:15:46.002 "data_size": 63488 00:15:46.002 }, 00:15:46.002 { 00:15:46.002 "name": "BaseBdev4", 00:15:46.002 "uuid": "bc7644de-83ed-58f5-baa0-a3de543322b1", 00:15:46.002 "is_configured": true, 00:15:46.002 "data_offset": 2048, 00:15:46.002 "data_size": 63488 00:15:46.002 } 00:15:46.002 ] 
00:15:46.002 }' 00:15:46.002 21:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.002 21:42:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.571 21:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:46.571 21:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.571 21:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.571 [2024-12-10 21:42:47.182908] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:46.571 [2024-12-10 21:42:47.182948] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:46.571 [2024-12-10 21:42:47.183047] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:46.571 [2024-12-10 21:42:47.183127] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:46.571 [2024-12-10 21:42:47.183154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:46.571 21:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.571 21:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.571 21:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:46.571 21:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.571 21:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.571 21:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.571 21:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:46.571 21:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 
-- # '[' true = true ']' 00:15:46.571 21:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:46.571 21:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:46.571 21:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:46.571 21:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:46.571 21:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:46.571 21:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:46.571 21:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:46.571 21:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:46.571 21:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:46.571 21:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:46.571 21:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:46.837 /dev/nbd0 00:15:46.837 21:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:46.837 21:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:46.837 21:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:46.837 21:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:46.837 21:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:46.837 21:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:46.837 21:42:47 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:46.837 21:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:46.837 21:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:46.837 21:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:46.837 21:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:46.837 1+0 records in 00:15:46.837 1+0 records out 00:15:46.837 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363688 s, 11.3 MB/s 00:15:46.837 21:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:46.837 21:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:46.838 21:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:46.838 21:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:46.838 21:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:46.838 21:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:46.838 21:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:46.838 21:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:47.098 /dev/nbd1 00:15:47.098 21:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:47.098 21:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:47.098 21:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 
00:15:47.098 21:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:47.098 21:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:47.098 21:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:47.098 21:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:47.098 21:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:47.098 21:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:47.098 21:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:47.098 21:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:47.098 1+0 records in 00:15:47.098 1+0 records out 00:15:47.098 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388288 s, 10.5 MB/s 00:15:47.098 21:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.098 21:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:47.098 21:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.098 21:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:47.098 21:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:47.098 21:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:47.098 21:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:47.098 21:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:47.386 21:42:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:47.386 21:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:47.386 21:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:47.386 21:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:47.386 21:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:47.386 21:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:47.386 21:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:47.647 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:47.647 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:47.647 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:47.647 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:47.647 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:47.647 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:47.647 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:47.647 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:47.647 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:47.647 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:47.906 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd1 00:15:47.906 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:47.906 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:47.906 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:47.906 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:47.906 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:47.906 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:47.906 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:47.906 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:47.906 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:47.906 21:42:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.906 21:42:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.906 21:42:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.906 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:47.906 21:42:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.906 21:42:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.906 [2024-12-10 21:42:48.457008] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:47.906 [2024-12-10 21:42:48.457085] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.906 [2024-12-10 21:42:48.457109] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:47.906 [2024-12-10 21:42:48.457118] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.906 [2024-12-10 21:42:48.459468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.906 [2024-12-10 21:42:48.459505] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:47.906 [2024-12-10 21:42:48.459624] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:47.906 [2024-12-10 21:42:48.459679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:47.906 [2024-12-10 21:42:48.459833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:47.906 [2024-12-10 21:42:48.459973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:47.906 spare 00:15:47.906 21:42:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.906 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:47.906 21:42:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.906 21:42:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.906 [2024-12-10 21:42:48.559889] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:47.906 [2024-12-10 21:42:48.559923] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:47.906 [2024-12-10 21:42:48.560260] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:15:47.906 [2024-12-10 21:42:48.560508] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:47.906 [2024-12-10 21:42:48.560527] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:47.906 [2024-12-10 21:42:48.560719] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:15:47.906 21:42:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.906 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:47.906 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:47.906 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.906 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:47.906 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:47.906 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:47.906 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.906 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.906 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.906 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.906 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.906 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.906 21:42:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.906 21:42:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.906 21:42:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.906 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.906 "name": "raid_bdev1", 00:15:47.906 "uuid": "070dda51-3fde-4cb0-b5a8-13732a7941b0", 00:15:47.906 
"strip_size_kb": 0, 00:15:47.906 "state": "online", 00:15:47.906 "raid_level": "raid1", 00:15:47.906 "superblock": true, 00:15:47.906 "num_base_bdevs": 4, 00:15:47.906 "num_base_bdevs_discovered": 3, 00:15:47.906 "num_base_bdevs_operational": 3, 00:15:47.906 "base_bdevs_list": [ 00:15:47.906 { 00:15:47.906 "name": "spare", 00:15:47.906 "uuid": "36a906cb-b747-5bb4-8a3c-ec6770a04186", 00:15:47.906 "is_configured": true, 00:15:47.906 "data_offset": 2048, 00:15:47.906 "data_size": 63488 00:15:47.906 }, 00:15:47.906 { 00:15:47.906 "name": null, 00:15:47.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.906 "is_configured": false, 00:15:47.906 "data_offset": 2048, 00:15:47.906 "data_size": 63488 00:15:47.906 }, 00:15:47.906 { 00:15:47.906 "name": "BaseBdev3", 00:15:47.906 "uuid": "591fbcad-4988-5add-9097-0c597a926569", 00:15:47.906 "is_configured": true, 00:15:47.906 "data_offset": 2048, 00:15:47.906 "data_size": 63488 00:15:47.906 }, 00:15:47.906 { 00:15:47.906 "name": "BaseBdev4", 00:15:47.906 "uuid": "bc7644de-83ed-58f5-baa0-a3de543322b1", 00:15:47.906 "is_configured": true, 00:15:47.906 "data_offset": 2048, 00:15:47.906 "data_size": 63488 00:15:47.906 } 00:15:47.906 ] 00:15:47.906 }' 00:15:47.906 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.906 21:42:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.474 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:48.474 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.474 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:48.474 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:48.474 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.474 21:42:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.474 21:42:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.474 21:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.474 21:42:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.474 21:42:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.474 21:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.474 "name": "raid_bdev1", 00:15:48.474 "uuid": "070dda51-3fde-4cb0-b5a8-13732a7941b0", 00:15:48.474 "strip_size_kb": 0, 00:15:48.474 "state": "online", 00:15:48.474 "raid_level": "raid1", 00:15:48.474 "superblock": true, 00:15:48.474 "num_base_bdevs": 4, 00:15:48.474 "num_base_bdevs_discovered": 3, 00:15:48.474 "num_base_bdevs_operational": 3, 00:15:48.474 "base_bdevs_list": [ 00:15:48.474 { 00:15:48.474 "name": "spare", 00:15:48.474 "uuid": "36a906cb-b747-5bb4-8a3c-ec6770a04186", 00:15:48.474 "is_configured": true, 00:15:48.474 "data_offset": 2048, 00:15:48.474 "data_size": 63488 00:15:48.474 }, 00:15:48.474 { 00:15:48.474 "name": null, 00:15:48.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.474 "is_configured": false, 00:15:48.474 "data_offset": 2048, 00:15:48.474 "data_size": 63488 00:15:48.474 }, 00:15:48.474 { 00:15:48.474 "name": "BaseBdev3", 00:15:48.474 "uuid": "591fbcad-4988-5add-9097-0c597a926569", 00:15:48.474 "is_configured": true, 00:15:48.474 "data_offset": 2048, 00:15:48.474 "data_size": 63488 00:15:48.474 }, 00:15:48.474 { 00:15:48.474 "name": "BaseBdev4", 00:15:48.474 "uuid": "bc7644de-83ed-58f5-baa0-a3de543322b1", 00:15:48.474 "is_configured": true, 00:15:48.474 "data_offset": 2048, 00:15:48.474 "data_size": 63488 00:15:48.474 } 00:15:48.474 ] 00:15:48.474 }' 00:15:48.474 21:42:49 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.474 21:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:48.474 21:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.474 21:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:48.474 21:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.474 21:42:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.474 21:42:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.474 21:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:48.474 21:42:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.474 21:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:48.474 21:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:48.474 21:42:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.474 21:42:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.474 [2024-12-10 21:42:49.108026] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:48.474 21:42:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.474 21:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:48.474 21:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.474 21:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.474 21:42:49 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:48.474 21:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:48.474 21:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:48.474 21:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.474 21:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.474 21:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.474 21:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.474 21:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.474 21:42:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.474 21:42:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.474 21:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.474 21:42:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.474 21:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.474 "name": "raid_bdev1", 00:15:48.474 "uuid": "070dda51-3fde-4cb0-b5a8-13732a7941b0", 00:15:48.474 "strip_size_kb": 0, 00:15:48.474 "state": "online", 00:15:48.474 "raid_level": "raid1", 00:15:48.474 "superblock": true, 00:15:48.474 "num_base_bdevs": 4, 00:15:48.474 "num_base_bdevs_discovered": 2, 00:15:48.474 "num_base_bdevs_operational": 2, 00:15:48.475 "base_bdevs_list": [ 00:15:48.475 { 00:15:48.475 "name": null, 00:15:48.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.475 "is_configured": false, 00:15:48.475 "data_offset": 0, 00:15:48.475 "data_size": 63488 00:15:48.475 }, 00:15:48.475 { 
00:15:48.475 "name": null, 00:15:48.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.475 "is_configured": false, 00:15:48.475 "data_offset": 2048, 00:15:48.475 "data_size": 63488 00:15:48.475 }, 00:15:48.475 { 00:15:48.475 "name": "BaseBdev3", 00:15:48.475 "uuid": "591fbcad-4988-5add-9097-0c597a926569", 00:15:48.475 "is_configured": true, 00:15:48.475 "data_offset": 2048, 00:15:48.475 "data_size": 63488 00:15:48.475 }, 00:15:48.475 { 00:15:48.475 "name": "BaseBdev4", 00:15:48.475 "uuid": "bc7644de-83ed-58f5-baa0-a3de543322b1", 00:15:48.475 "is_configured": true, 00:15:48.475 "data_offset": 2048, 00:15:48.475 "data_size": 63488 00:15:48.475 } 00:15:48.475 ] 00:15:48.475 }' 00:15:48.475 21:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.475 21:42:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.049 21:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:49.050 21:42:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.050 21:42:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.050 [2024-12-10 21:42:49.543387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:49.050 [2024-12-10 21:42:49.543626] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:49.050 [2024-12-10 21:42:49.543651] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:49.050 [2024-12-10 21:42:49.543692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:49.050 [2024-12-10 21:42:49.559824] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:15:49.050 21:42:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.050 21:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:49.050 [2024-12-10 21:42:49.561808] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:49.993 21:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:49.994 21:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.994 21:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:49.994 21:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:49.994 21:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.994 21:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.994 21:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.994 21:42:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.994 21:42:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.994 21:42:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.994 21:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.994 "name": "raid_bdev1", 00:15:49.994 "uuid": "070dda51-3fde-4cb0-b5a8-13732a7941b0", 00:15:49.994 "strip_size_kb": 0, 00:15:49.994 "state": "online", 00:15:49.994 "raid_level": "raid1", 
00:15:49.994 "superblock": true, 00:15:49.994 "num_base_bdevs": 4, 00:15:49.994 "num_base_bdevs_discovered": 3, 00:15:49.994 "num_base_bdevs_operational": 3, 00:15:49.994 "process": { 00:15:49.994 "type": "rebuild", 00:15:49.994 "target": "spare", 00:15:49.994 "progress": { 00:15:49.994 "blocks": 20480, 00:15:49.994 "percent": 32 00:15:49.994 } 00:15:49.994 }, 00:15:49.994 "base_bdevs_list": [ 00:15:49.994 { 00:15:49.994 "name": "spare", 00:15:49.994 "uuid": "36a906cb-b747-5bb4-8a3c-ec6770a04186", 00:15:49.994 "is_configured": true, 00:15:49.994 "data_offset": 2048, 00:15:49.994 "data_size": 63488 00:15:49.994 }, 00:15:49.994 { 00:15:49.994 "name": null, 00:15:49.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.994 "is_configured": false, 00:15:49.994 "data_offset": 2048, 00:15:49.994 "data_size": 63488 00:15:49.994 }, 00:15:49.994 { 00:15:49.994 "name": "BaseBdev3", 00:15:49.994 "uuid": "591fbcad-4988-5add-9097-0c597a926569", 00:15:49.994 "is_configured": true, 00:15:49.994 "data_offset": 2048, 00:15:49.994 "data_size": 63488 00:15:49.994 }, 00:15:49.994 { 00:15:49.994 "name": "BaseBdev4", 00:15:49.994 "uuid": "bc7644de-83ed-58f5-baa0-a3de543322b1", 00:15:49.994 "is_configured": true, 00:15:49.994 "data_offset": 2048, 00:15:49.994 "data_size": 63488 00:15:49.994 } 00:15:49.994 ] 00:15:49.994 }' 00:15:49.994 21:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.994 21:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:49.994 21:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:49.994 21:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:49.994 21:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:49.994 21:42:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:49.994 21:42:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.994 [2024-12-10 21:42:50.697031] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:49.994 [2024-12-10 21:42:50.767190] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:49.994 [2024-12-10 21:42:50.767254] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.994 [2024-12-10 21:42:50.767289] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:49.994 [2024-12-10 21:42:50.767296] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:50.253 21:42:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.253 21:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:50.253 21:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.253 21:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.253 21:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:50.253 21:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:50.253 21:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:50.253 21:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.253 21:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.253 21:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.253 21:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.253 21:42:50 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.253 21:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.253 21:42:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.253 21:42:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.253 21:42:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.253 21:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.253 "name": "raid_bdev1", 00:15:50.253 "uuid": "070dda51-3fde-4cb0-b5a8-13732a7941b0", 00:15:50.253 "strip_size_kb": 0, 00:15:50.253 "state": "online", 00:15:50.253 "raid_level": "raid1", 00:15:50.253 "superblock": true, 00:15:50.253 "num_base_bdevs": 4, 00:15:50.253 "num_base_bdevs_discovered": 2, 00:15:50.253 "num_base_bdevs_operational": 2, 00:15:50.253 "base_bdevs_list": [ 00:15:50.253 { 00:15:50.253 "name": null, 00:15:50.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.253 "is_configured": false, 00:15:50.253 "data_offset": 0, 00:15:50.253 "data_size": 63488 00:15:50.253 }, 00:15:50.253 { 00:15:50.253 "name": null, 00:15:50.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.253 "is_configured": false, 00:15:50.253 "data_offset": 2048, 00:15:50.253 "data_size": 63488 00:15:50.253 }, 00:15:50.253 { 00:15:50.253 "name": "BaseBdev3", 00:15:50.253 "uuid": "591fbcad-4988-5add-9097-0c597a926569", 00:15:50.253 "is_configured": true, 00:15:50.253 "data_offset": 2048, 00:15:50.253 "data_size": 63488 00:15:50.253 }, 00:15:50.253 { 00:15:50.253 "name": "BaseBdev4", 00:15:50.253 "uuid": "bc7644de-83ed-58f5-baa0-a3de543322b1", 00:15:50.253 "is_configured": true, 00:15:50.253 "data_offset": 2048, 00:15:50.253 "data_size": 63488 00:15:50.253 } 00:15:50.253 ] 00:15:50.253 }' 00:15:50.253 21:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:15:50.253 21:42:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.513 21:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:50.513 21:42:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.513 21:42:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.513 [2024-12-10 21:42:51.160524] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:50.513 [2024-12-10 21:42:51.160600] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.513 [2024-12-10 21:42:51.160634] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:15:50.513 [2024-12-10 21:42:51.160644] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.513 [2024-12-10 21:42:51.161160] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.513 [2024-12-10 21:42:51.161194] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:50.513 [2024-12-10 21:42:51.161304] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:50.513 [2024-12-10 21:42:51.161322] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:50.513 [2024-12-10 21:42:51.161338] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:50.513 [2024-12-10 21:42:51.161367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:50.513 [2024-12-10 21:42:51.177741] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:15:50.513 spare 00:15:50.513 21:42:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.513 21:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:50.513 [2024-12-10 21:42:51.179780] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:51.452 21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:51.452 21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:51.452 21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:51.452 21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:51.452 21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:51.452 21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.452 21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.452 21:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.452 21:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.452 21:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.712 21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:51.712 "name": "raid_bdev1", 00:15:51.712 "uuid": "070dda51-3fde-4cb0-b5a8-13732a7941b0", 00:15:51.712 "strip_size_kb": 0, 00:15:51.712 "state": "online", 00:15:51.712 
"raid_level": "raid1", 00:15:51.712 "superblock": true, 00:15:51.712 "num_base_bdevs": 4, 00:15:51.712 "num_base_bdevs_discovered": 3, 00:15:51.712 "num_base_bdevs_operational": 3, 00:15:51.712 "process": { 00:15:51.712 "type": "rebuild", 00:15:51.712 "target": "spare", 00:15:51.712 "progress": { 00:15:51.712 "blocks": 20480, 00:15:51.712 "percent": 32 00:15:51.712 } 00:15:51.712 }, 00:15:51.712 "base_bdevs_list": [ 00:15:51.712 { 00:15:51.712 "name": "spare", 00:15:51.712 "uuid": "36a906cb-b747-5bb4-8a3c-ec6770a04186", 00:15:51.712 "is_configured": true, 00:15:51.712 "data_offset": 2048, 00:15:51.712 "data_size": 63488 00:15:51.712 }, 00:15:51.712 { 00:15:51.712 "name": null, 00:15:51.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.712 "is_configured": false, 00:15:51.712 "data_offset": 2048, 00:15:51.712 "data_size": 63488 00:15:51.712 }, 00:15:51.712 { 00:15:51.712 "name": "BaseBdev3", 00:15:51.712 "uuid": "591fbcad-4988-5add-9097-0c597a926569", 00:15:51.712 "is_configured": true, 00:15:51.712 "data_offset": 2048, 00:15:51.712 "data_size": 63488 00:15:51.712 }, 00:15:51.712 { 00:15:51.712 "name": "BaseBdev4", 00:15:51.712 "uuid": "bc7644de-83ed-58f5-baa0-a3de543322b1", 00:15:51.712 "is_configured": true, 00:15:51.712 "data_offset": 2048, 00:15:51.712 "data_size": 63488 00:15:51.712 } 00:15:51.712 ] 00:15:51.712 }' 00:15:51.712 21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:51.712 21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:51.712 21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:51.712 21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:51.712 21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:51.712 21:42:52 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.712 21:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.712 [2024-12-10 21:42:52.335303] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:51.712 [2024-12-10 21:42:52.385527] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:51.712 [2024-12-10 21:42:52.385637] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.712 [2024-12-10 21:42:52.385656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:51.712 [2024-12-10 21:42:52.385668] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:51.712 21:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.712 21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:51.712 21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.712 21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.712 21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:51.712 21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:51.712 21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:51.712 21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.712 21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.712 21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.712 21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.712 
21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.712 21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.712 21:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.712 21:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.712 21:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.712 21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.712 "name": "raid_bdev1", 00:15:51.712 "uuid": "070dda51-3fde-4cb0-b5a8-13732a7941b0", 00:15:51.712 "strip_size_kb": 0, 00:15:51.712 "state": "online", 00:15:51.712 "raid_level": "raid1", 00:15:51.712 "superblock": true, 00:15:51.712 "num_base_bdevs": 4, 00:15:51.712 "num_base_bdevs_discovered": 2, 00:15:51.712 "num_base_bdevs_operational": 2, 00:15:51.712 "base_bdevs_list": [ 00:15:51.712 { 00:15:51.712 "name": null, 00:15:51.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.712 "is_configured": false, 00:15:51.712 "data_offset": 0, 00:15:51.712 "data_size": 63488 00:15:51.712 }, 00:15:51.712 { 00:15:51.712 "name": null, 00:15:51.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.712 "is_configured": false, 00:15:51.712 "data_offset": 2048, 00:15:51.712 "data_size": 63488 00:15:51.712 }, 00:15:51.712 { 00:15:51.712 "name": "BaseBdev3", 00:15:51.712 "uuid": "591fbcad-4988-5add-9097-0c597a926569", 00:15:51.712 "is_configured": true, 00:15:51.712 "data_offset": 2048, 00:15:51.712 "data_size": 63488 00:15:51.712 }, 00:15:51.712 { 00:15:51.712 "name": "BaseBdev4", 00:15:51.712 "uuid": "bc7644de-83ed-58f5-baa0-a3de543322b1", 00:15:51.712 "is_configured": true, 00:15:51.712 "data_offset": 2048, 00:15:51.712 "data_size": 63488 00:15:51.712 } 00:15:51.712 ] 00:15:51.712 }' 00:15:51.712 21:42:52 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.712 21:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.281 21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:52.281 21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:52.281 21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:52.281 21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:52.281 21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:52.281 21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.281 21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.281 21:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.281 21:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.281 21:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.281 21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:52.281 "name": "raid_bdev1", 00:15:52.281 "uuid": "070dda51-3fde-4cb0-b5a8-13732a7941b0", 00:15:52.281 "strip_size_kb": 0, 00:15:52.281 "state": "online", 00:15:52.281 "raid_level": "raid1", 00:15:52.281 "superblock": true, 00:15:52.281 "num_base_bdevs": 4, 00:15:52.281 "num_base_bdevs_discovered": 2, 00:15:52.281 "num_base_bdevs_operational": 2, 00:15:52.281 "base_bdevs_list": [ 00:15:52.281 { 00:15:52.281 "name": null, 00:15:52.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.281 "is_configured": false, 00:15:52.281 "data_offset": 0, 00:15:52.281 "data_size": 63488 00:15:52.281 }, 00:15:52.281 
{ 00:15:52.281 "name": null, 00:15:52.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.281 "is_configured": false, 00:15:52.281 "data_offset": 2048, 00:15:52.281 "data_size": 63488 00:15:52.281 }, 00:15:52.281 { 00:15:52.281 "name": "BaseBdev3", 00:15:52.281 "uuid": "591fbcad-4988-5add-9097-0c597a926569", 00:15:52.281 "is_configured": true, 00:15:52.281 "data_offset": 2048, 00:15:52.281 "data_size": 63488 00:15:52.281 }, 00:15:52.281 { 00:15:52.281 "name": "BaseBdev4", 00:15:52.281 "uuid": "bc7644de-83ed-58f5-baa0-a3de543322b1", 00:15:52.281 "is_configured": true, 00:15:52.281 "data_offset": 2048, 00:15:52.281 "data_size": 63488 00:15:52.281 } 00:15:52.281 ] 00:15:52.281 }' 00:15:52.281 21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:52.281 21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:52.281 21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:52.281 21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:52.281 21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:52.281 21:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.281 21:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.281 21:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.281 21:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:52.281 21:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.281 21:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.281 [2024-12-10 21:42:53.000054] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:52.281 [2024-12-10 21:42:53.000135] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.281 [2024-12-10 21:42:53.000159] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:15:52.281 [2024-12-10 21:42:53.000171] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:52.281 [2024-12-10 21:42:53.000699] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.281 [2024-12-10 21:42:53.000732] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:52.281 [2024-12-10 21:42:53.000827] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:52.281 [2024-12-10 21:42:53.000849] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:52.281 [2024-12-10 21:42:53.000858] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:52.281 [2024-12-10 21:42:53.000884] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:52.281 BaseBdev1 00:15:52.281 21:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.281 21:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:53.658 21:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:53.658 21:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:53.658 21:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:53.658 21:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:53.658 21:42:54 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:53.658 21:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:53.658 21:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.658 21:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.658 21:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.658 21:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.658 21:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.658 21:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.658 21:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.658 21:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.658 21:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.658 21:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.658 "name": "raid_bdev1", 00:15:53.658 "uuid": "070dda51-3fde-4cb0-b5a8-13732a7941b0", 00:15:53.658 "strip_size_kb": 0, 00:15:53.658 "state": "online", 00:15:53.658 "raid_level": "raid1", 00:15:53.658 "superblock": true, 00:15:53.658 "num_base_bdevs": 4, 00:15:53.658 "num_base_bdevs_discovered": 2, 00:15:53.658 "num_base_bdevs_operational": 2, 00:15:53.658 "base_bdevs_list": [ 00:15:53.658 { 00:15:53.658 "name": null, 00:15:53.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.658 "is_configured": false, 00:15:53.658 "data_offset": 0, 00:15:53.658 "data_size": 63488 00:15:53.658 }, 00:15:53.658 { 00:15:53.658 "name": null, 00:15:53.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.658 
"is_configured": false, 00:15:53.659 "data_offset": 2048, 00:15:53.659 "data_size": 63488 00:15:53.659 }, 00:15:53.659 { 00:15:53.659 "name": "BaseBdev3", 00:15:53.659 "uuid": "591fbcad-4988-5add-9097-0c597a926569", 00:15:53.659 "is_configured": true, 00:15:53.659 "data_offset": 2048, 00:15:53.659 "data_size": 63488 00:15:53.659 }, 00:15:53.659 { 00:15:53.659 "name": "BaseBdev4", 00:15:53.659 "uuid": "bc7644de-83ed-58f5-baa0-a3de543322b1", 00:15:53.659 "is_configured": true, 00:15:53.659 "data_offset": 2048, 00:15:53.659 "data_size": 63488 00:15:53.659 } 00:15:53.659 ] 00:15:53.659 }' 00:15:53.659 21:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.659 21:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.659 21:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:53.659 21:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:53.659 21:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:53.659 21:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:53.659 21:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:53.659 21:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.659 21:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.659 21:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.659 21:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.918 21:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.918 21:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:15:53.918 "name": "raid_bdev1", 00:15:53.918 "uuid": "070dda51-3fde-4cb0-b5a8-13732a7941b0", 00:15:53.918 "strip_size_kb": 0, 00:15:53.918 "state": "online", 00:15:53.918 "raid_level": "raid1", 00:15:53.918 "superblock": true, 00:15:53.918 "num_base_bdevs": 4, 00:15:53.918 "num_base_bdevs_discovered": 2, 00:15:53.918 "num_base_bdevs_operational": 2, 00:15:53.918 "base_bdevs_list": [ 00:15:53.918 { 00:15:53.918 "name": null, 00:15:53.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.918 "is_configured": false, 00:15:53.918 "data_offset": 0, 00:15:53.918 "data_size": 63488 00:15:53.918 }, 00:15:53.918 { 00:15:53.918 "name": null, 00:15:53.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.918 "is_configured": false, 00:15:53.918 "data_offset": 2048, 00:15:53.918 "data_size": 63488 00:15:53.918 }, 00:15:53.918 { 00:15:53.918 "name": "BaseBdev3", 00:15:53.918 "uuid": "591fbcad-4988-5add-9097-0c597a926569", 00:15:53.918 "is_configured": true, 00:15:53.918 "data_offset": 2048, 00:15:53.918 "data_size": 63488 00:15:53.918 }, 00:15:53.918 { 00:15:53.918 "name": "BaseBdev4", 00:15:53.918 "uuid": "bc7644de-83ed-58f5-baa0-a3de543322b1", 00:15:53.918 "is_configured": true, 00:15:53.918 "data_offset": 2048, 00:15:53.918 "data_size": 63488 00:15:53.918 } 00:15:53.918 ] 00:15:53.918 }' 00:15:53.918 21:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:53.918 21:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:53.918 21:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:53.918 21:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:53.918 21:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:53.918 21:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:15:53.918 21:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:53.918 21:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:53.918 21:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:53.918 21:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:53.918 21:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:53.918 21:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:53.918 21:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.918 21:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.918 [2024-12-10 21:42:54.557442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:53.918 [2024-12-10 21:42:54.557665] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:53.918 [2024-12-10 21:42:54.557682] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:53.918 request: 00:15:53.918 { 00:15:53.918 "base_bdev": "BaseBdev1", 00:15:53.918 "raid_bdev": "raid_bdev1", 00:15:53.918 "method": "bdev_raid_add_base_bdev", 00:15:53.918 "req_id": 1 00:15:53.918 } 00:15:53.918 Got JSON-RPC error response 00:15:53.918 response: 00:15:53.918 { 00:15:53.918 "code": -22, 00:15:53.918 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:53.918 } 00:15:53.918 21:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:53.918 21:42:54 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:15:53.918 21:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:53.918 21:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:53.918 21:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:53.918 21:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:54.862 21:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:54.862 21:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.862 21:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.862 21:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:54.862 21:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:54.862 21:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:54.862 21:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.862 21:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.862 21:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.862 21:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.862 21:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.862 21:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.862 21:42:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.862 21:42:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:54.862 21:42:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.862 21:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.862 "name": "raid_bdev1", 00:15:54.862 "uuid": "070dda51-3fde-4cb0-b5a8-13732a7941b0", 00:15:54.862 "strip_size_kb": 0, 00:15:54.862 "state": "online", 00:15:54.862 "raid_level": "raid1", 00:15:54.862 "superblock": true, 00:15:54.862 "num_base_bdevs": 4, 00:15:54.862 "num_base_bdevs_discovered": 2, 00:15:54.862 "num_base_bdevs_operational": 2, 00:15:54.862 "base_bdevs_list": [ 00:15:54.862 { 00:15:54.862 "name": null, 00:15:54.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.862 "is_configured": false, 00:15:54.862 "data_offset": 0, 00:15:54.862 "data_size": 63488 00:15:54.862 }, 00:15:54.862 { 00:15:54.862 "name": null, 00:15:54.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.862 "is_configured": false, 00:15:54.862 "data_offset": 2048, 00:15:54.862 "data_size": 63488 00:15:54.862 }, 00:15:54.862 { 00:15:54.862 "name": "BaseBdev3", 00:15:54.862 "uuid": "591fbcad-4988-5add-9097-0c597a926569", 00:15:54.862 "is_configured": true, 00:15:54.862 "data_offset": 2048, 00:15:54.862 "data_size": 63488 00:15:54.862 }, 00:15:54.862 { 00:15:54.862 "name": "BaseBdev4", 00:15:54.862 "uuid": "bc7644de-83ed-58f5-baa0-a3de543322b1", 00:15:54.862 "is_configured": true, 00:15:54.862 "data_offset": 2048, 00:15:54.862 "data_size": 63488 00:15:54.862 } 00:15:54.862 ] 00:15:54.862 }' 00:15:54.862 21:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.862 21:42:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.438 21:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:55.438 21:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:55.438 21:42:55 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:55.438 21:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:55.438 21:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:55.438 21:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.438 21:42:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.438 21:42:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.438 21:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.438 21:42:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.438 21:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:55.438 "name": "raid_bdev1", 00:15:55.438 "uuid": "070dda51-3fde-4cb0-b5a8-13732a7941b0", 00:15:55.438 "strip_size_kb": 0, 00:15:55.438 "state": "online", 00:15:55.438 "raid_level": "raid1", 00:15:55.438 "superblock": true, 00:15:55.438 "num_base_bdevs": 4, 00:15:55.438 "num_base_bdevs_discovered": 2, 00:15:55.438 "num_base_bdevs_operational": 2, 00:15:55.438 "base_bdevs_list": [ 00:15:55.438 { 00:15:55.438 "name": null, 00:15:55.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.438 "is_configured": false, 00:15:55.438 "data_offset": 0, 00:15:55.438 "data_size": 63488 00:15:55.438 }, 00:15:55.438 { 00:15:55.438 "name": null, 00:15:55.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.438 "is_configured": false, 00:15:55.438 "data_offset": 2048, 00:15:55.438 "data_size": 63488 00:15:55.438 }, 00:15:55.438 { 00:15:55.438 "name": "BaseBdev3", 00:15:55.438 "uuid": "591fbcad-4988-5add-9097-0c597a926569", 00:15:55.438 "is_configured": true, 00:15:55.438 "data_offset": 2048, 00:15:55.438 "data_size": 63488 00:15:55.438 }, 
00:15:55.438 { 00:15:55.438 "name": "BaseBdev4", 00:15:55.438 "uuid": "bc7644de-83ed-58f5-baa0-a3de543322b1", 00:15:55.438 "is_configured": true, 00:15:55.438 "data_offset": 2048, 00:15:55.438 "data_size": 63488 00:15:55.438 } 00:15:55.438 ] 00:15:55.438 }' 00:15:55.438 21:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:55.438 21:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:55.438 21:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:55.438 21:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:55.438 21:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78160 00:15:55.438 21:42:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78160 ']' 00:15:55.438 21:42:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78160 00:15:55.438 21:42:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:55.438 21:42:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:55.439 21:42:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78160 00:15:55.439 21:42:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:55.439 21:42:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:55.439 killing process with pid 78160 00:15:55.439 21:42:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78160' 00:15:55.439 21:42:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78160 00:15:55.439 Received shutdown signal, test time was about 60.000000 seconds 00:15:55.439 00:15:55.439 Latency(us) 00:15:55.439 
[2024-12-10T21:42:56.222Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:55.439 [2024-12-10T21:42:56.222Z] =================================================================================================================== 00:15:55.439 [2024-12-10T21:42:56.222Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:55.439 [2024-12-10 21:42:56.157736] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:55.439 21:42:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78160 00:15:55.439 [2024-12-10 21:42:56.157866] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:55.439 [2024-12-10 21:42:56.157950] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:55.439 [2024-12-10 21:42:56.157962] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:56.005 [2024-12-10 21:42:56.702315] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:57.381 00:15:57.381 real 0m25.692s 00:15:57.381 user 0m30.913s 00:15:57.381 sys 0m3.740s 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.381 ************************************ 00:15:57.381 END TEST raid_rebuild_test_sb 00:15:57.381 ************************************ 00:15:57.381 21:42:57 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:15:57.381 21:42:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:57.381 21:42:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:57.381 21:42:57 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:15:57.381 ************************************ 00:15:57.381 START TEST raid_rebuild_test_io 00:15:57.381 ************************************ 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78919 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78919 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78919 ']' 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:15:57.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:57.381 21:42:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.381 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:57.381 Zero copy mechanism will not be used. 00:15:57.381 [2024-12-10 21:42:58.064885] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:15:57.381 [2024-12-10 21:42:58.065006] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78919 ] 00:15:57.638 [2024-12-10 21:42:58.237705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.639 [2024-12-10 21:42:58.360526] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.896 [2024-12-10 21:42:58.565682] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:57.896 [2024-12-10 21:42:58.565749] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.154 21:42:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:58.154 21:42:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:15:58.154 21:42:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:58.154 21:42:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:15:58.154 21:42:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.154 21:42:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.412 BaseBdev1_malloc 00:15:58.412 21:42:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.412 21:42:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:58.412 21:42:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.412 21:42:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.413 [2024-12-10 21:42:58.954879] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:58.413 [2024-12-10 21:42:58.954960] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.413 [2024-12-10 21:42:58.954981] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:58.413 [2024-12-10 21:42:58.954993] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.413 [2024-12-10 21:42:58.957262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.413 [2024-12-10 21:42:58.957302] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:58.413 BaseBdev1 00:15:58.413 21:42:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.413 21:42:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:58.413 21:42:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:58.413 21:42:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.413 21:42:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:15:58.413 BaseBdev2_malloc 00:15:58.413 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.413 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:58.413 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.413 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.413 [2024-12-10 21:42:59.009437] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:58.413 [2024-12-10 21:42:59.009500] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.413 [2024-12-10 21:42:59.009518] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:58.413 [2024-12-10 21:42:59.009531] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.413 [2024-12-10 21:42:59.011717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.413 [2024-12-10 21:42:59.011755] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:58.413 BaseBdev2 00:15:58.413 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.413 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:58.413 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:58.413 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.413 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.413 BaseBdev3_malloc 00:15:58.413 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.413 21:42:59 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:58.413 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.413 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.413 [2024-12-10 21:42:59.081104] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:58.413 [2024-12-10 21:42:59.081166] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.413 [2024-12-10 21:42:59.081187] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:58.413 [2024-12-10 21:42:59.081199] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.413 [2024-12-10 21:42:59.083334] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.413 [2024-12-10 21:42:59.083376] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:58.413 BaseBdev3 00:15:58.413 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.413 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:58.413 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:58.413 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.413 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.413 BaseBdev4_malloc 00:15:58.413 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.413 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:58.413 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:58.413 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.413 [2024-12-10 21:42:59.135408] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:58.413 [2024-12-10 21:42:59.135487] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.413 [2024-12-10 21:42:59.135508] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:58.413 [2024-12-10 21:42:59.135518] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.413 [2024-12-10 21:42:59.137753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.413 [2024-12-10 21:42:59.137794] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:58.413 BaseBdev4 00:15:58.413 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.413 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:58.413 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.413 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.413 spare_malloc 00:15:58.413 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.413 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:58.413 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.413 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.671 spare_delay 00:15:58.671 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.671 21:42:59 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:58.671 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.671 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.671 [2024-12-10 21:42:59.204222] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:58.671 [2024-12-10 21:42:59.204283] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.671 [2024-12-10 21:42:59.204301] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:58.671 [2024-12-10 21:42:59.204312] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.671 [2024-12-10 21:42:59.206646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.671 [2024-12-10 21:42:59.206697] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:58.671 spare 00:15:58.671 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.671 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:58.671 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.671 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.671 [2024-12-10 21:42:59.216258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:58.671 [2024-12-10 21:42:59.218253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:58.671 [2024-12-10 21:42:59.218328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:58.671 [2024-12-10 21:42:59.218387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:15:58.671 [2024-12-10 21:42:59.218505] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:58.671 [2024-12-10 21:42:59.218524] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:58.671 [2024-12-10 21:42:59.218795] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:58.672 [2024-12-10 21:42:59.218993] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:58.672 [2024-12-10 21:42:59.219018] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:58.672 [2024-12-10 21:42:59.219195] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.672 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.672 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:58.672 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.672 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.672 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:58.672 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:58.672 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:58.672 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.672 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.672 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.672 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:15:58.672 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.672 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.672 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.672 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.672 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.672 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.672 "name": "raid_bdev1", 00:15:58.672 "uuid": "9b0e495d-316d-4739-9a89-455fc4b33cc0", 00:15:58.672 "strip_size_kb": 0, 00:15:58.672 "state": "online", 00:15:58.672 "raid_level": "raid1", 00:15:58.672 "superblock": false, 00:15:58.672 "num_base_bdevs": 4, 00:15:58.672 "num_base_bdevs_discovered": 4, 00:15:58.672 "num_base_bdevs_operational": 4, 00:15:58.672 "base_bdevs_list": [ 00:15:58.672 { 00:15:58.672 "name": "BaseBdev1", 00:15:58.672 "uuid": "105ae138-ab4c-5212-92b2-ba9560433eb0", 00:15:58.672 "is_configured": true, 00:15:58.672 "data_offset": 0, 00:15:58.672 "data_size": 65536 00:15:58.672 }, 00:15:58.672 { 00:15:58.672 "name": "BaseBdev2", 00:15:58.672 "uuid": "2b007f0a-6e03-51b7-9de2-e295935c8552", 00:15:58.672 "is_configured": true, 00:15:58.672 "data_offset": 0, 00:15:58.672 "data_size": 65536 00:15:58.672 }, 00:15:58.672 { 00:15:58.672 "name": "BaseBdev3", 00:15:58.672 "uuid": "323a06e0-06c5-5c3d-8f91-b6d0db43fd1c", 00:15:58.672 "is_configured": true, 00:15:58.672 "data_offset": 0, 00:15:58.672 "data_size": 65536 00:15:58.672 }, 00:15:58.672 { 00:15:58.672 "name": "BaseBdev4", 00:15:58.672 "uuid": "4719765d-928f-5a00-9ccc-50527846f5ac", 00:15:58.672 "is_configured": true, 00:15:58.672 "data_offset": 0, 00:15:58.672 "data_size": 65536 00:15:58.672 } 00:15:58.672 ] 00:15:58.672 }' 00:15:58.672 
21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.672 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.930 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:58.930 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.930 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.930 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:58.930 [2024-12-10 21:42:59.655967] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:58.930 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.930 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:58.930 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.930 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.930 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:58.930 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.189 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.189 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:59.189 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:59.189 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:59.189 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.189 21:42:59 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:59.189 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:59.189 [2024-12-10 21:42:59.759393] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:59.189 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.189 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:59.189 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.189 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.189 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.189 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.189 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:59.189 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.189 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.189 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.189 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.189 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.189 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.189 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.189 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.189 21:42:59 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.189 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.189 "name": "raid_bdev1", 00:15:59.189 "uuid": "9b0e495d-316d-4739-9a89-455fc4b33cc0", 00:15:59.189 "strip_size_kb": 0, 00:15:59.189 "state": "online", 00:15:59.189 "raid_level": "raid1", 00:15:59.189 "superblock": false, 00:15:59.189 "num_base_bdevs": 4, 00:15:59.189 "num_base_bdevs_discovered": 3, 00:15:59.189 "num_base_bdevs_operational": 3, 00:15:59.189 "base_bdevs_list": [ 00:15:59.189 { 00:15:59.189 "name": null, 00:15:59.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.190 "is_configured": false, 00:15:59.190 "data_offset": 0, 00:15:59.190 "data_size": 65536 00:15:59.190 }, 00:15:59.190 { 00:15:59.190 "name": "BaseBdev2", 00:15:59.190 "uuid": "2b007f0a-6e03-51b7-9de2-e295935c8552", 00:15:59.190 "is_configured": true, 00:15:59.190 "data_offset": 0, 00:15:59.190 "data_size": 65536 00:15:59.190 }, 00:15:59.190 { 00:15:59.190 "name": "BaseBdev3", 00:15:59.190 "uuid": "323a06e0-06c5-5c3d-8f91-b6d0db43fd1c", 00:15:59.190 "is_configured": true, 00:15:59.190 "data_offset": 0, 00:15:59.190 "data_size": 65536 00:15:59.190 }, 00:15:59.190 { 00:15:59.190 "name": "BaseBdev4", 00:15:59.190 "uuid": "4719765d-928f-5a00-9ccc-50527846f5ac", 00:15:59.190 "is_configured": true, 00:15:59.190 "data_offset": 0, 00:15:59.190 "data_size": 65536 00:15:59.190 } 00:15:59.190 ] 00:15:59.190 }' 00:15:59.190 21:42:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.190 21:42:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.190 [2024-12-10 21:42:59.860281] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:59.190 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:59.190 Zero copy mechanism will not be used. 00:15:59.190 Running I/O for 60 seconds... 
00:15:59.448 21:43:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:59.448 21:43:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.448 21:43:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.706 [2024-12-10 21:43:00.238855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:59.706 21:43:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.706 21:43:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:59.706 [2024-12-10 21:43:00.328017] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:59.706 [2024-12-10 21:43:00.330211] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:59.706 [2024-12-10 21:43:00.448531] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:59.706 [2024-12-10 21:43:00.449269] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:00.274 166.00 IOPS, 498.00 MiB/s [2024-12-10T21:43:01.057Z] [2024-12-10 21:43:00.980759] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:00.274 [2024-12-10 21:43:00.981026] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:00.532 21:43:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:00.532 21:43:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.532 21:43:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:00.532 21:43:01 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:00.532 21:43:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.532 21:43:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.532 21:43:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.532 21:43:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.532 21:43:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:00.790 21:43:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.790 21:43:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.790 "name": "raid_bdev1", 00:16:00.790 "uuid": "9b0e495d-316d-4739-9a89-455fc4b33cc0", 00:16:00.790 "strip_size_kb": 0, 00:16:00.790 "state": "online", 00:16:00.790 "raid_level": "raid1", 00:16:00.790 "superblock": false, 00:16:00.790 "num_base_bdevs": 4, 00:16:00.790 "num_base_bdevs_discovered": 4, 00:16:00.790 "num_base_bdevs_operational": 4, 00:16:00.790 "process": { 00:16:00.790 "type": "rebuild", 00:16:00.790 "target": "spare", 00:16:00.790 "progress": { 00:16:00.790 "blocks": 14336, 00:16:00.790 "percent": 21 00:16:00.790 } 00:16:00.790 }, 00:16:00.790 "base_bdevs_list": [ 00:16:00.790 { 00:16:00.790 "name": "spare", 00:16:00.790 "uuid": "fbdfe519-2b29-5ce1-a46a-1f4cbeb1640a", 00:16:00.790 "is_configured": true, 00:16:00.790 "data_offset": 0, 00:16:00.790 "data_size": 65536 00:16:00.790 }, 00:16:00.790 { 00:16:00.790 "name": "BaseBdev2", 00:16:00.790 "uuid": "2b007f0a-6e03-51b7-9de2-e295935c8552", 00:16:00.790 "is_configured": true, 00:16:00.790 "data_offset": 0, 00:16:00.790 "data_size": 65536 00:16:00.790 }, 00:16:00.790 { 00:16:00.790 "name": "BaseBdev3", 00:16:00.790 "uuid": "323a06e0-06c5-5c3d-8f91-b6d0db43fd1c", 00:16:00.790 
"is_configured": true, 00:16:00.790 "data_offset": 0, 00:16:00.790 "data_size": 65536 00:16:00.790 }, 00:16:00.790 { 00:16:00.790 "name": "BaseBdev4", 00:16:00.790 "uuid": "4719765d-928f-5a00-9ccc-50527846f5ac", 00:16:00.790 "is_configured": true, 00:16:00.790 "data_offset": 0, 00:16:00.790 "data_size": 65536 00:16:00.790 } 00:16:00.790 ] 00:16:00.790 }' 00:16:00.790 21:43:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.790 21:43:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:00.790 21:43:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.790 21:43:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:00.790 21:43:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:00.790 21:43:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.790 21:43:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:00.790 [2024-12-10 21:43:01.468726] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:01.049 [2024-12-10 21:43:01.587076] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:01.049 [2024-12-10 21:43:01.597924] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.049 [2024-12-10 21:43:01.597984] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:01.049 [2024-12-10 21:43:01.598004] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:01.049 [2024-12-10 21:43:01.625777] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:16:01.049 21:43:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:16:01.049 21:43:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:01.049 21:43:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.049 21:43:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.049 21:43:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:01.049 21:43:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:01.049 21:43:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:01.049 21:43:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.049 21:43:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.049 21:43:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.049 21:43:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.049 21:43:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.049 21:43:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.049 21:43:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.049 21:43:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:01.049 21:43:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.049 21:43:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.049 "name": "raid_bdev1", 00:16:01.049 "uuid": "9b0e495d-316d-4739-9a89-455fc4b33cc0", 00:16:01.049 "strip_size_kb": 0, 00:16:01.049 "state": "online", 00:16:01.049 "raid_level": "raid1", 00:16:01.049 "superblock": false, 
00:16:01.049 "num_base_bdevs": 4, 00:16:01.049 "num_base_bdevs_discovered": 3, 00:16:01.049 "num_base_bdevs_operational": 3, 00:16:01.049 "base_bdevs_list": [ 00:16:01.049 { 00:16:01.049 "name": null, 00:16:01.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.049 "is_configured": false, 00:16:01.049 "data_offset": 0, 00:16:01.049 "data_size": 65536 00:16:01.049 }, 00:16:01.049 { 00:16:01.049 "name": "BaseBdev2", 00:16:01.049 "uuid": "2b007f0a-6e03-51b7-9de2-e295935c8552", 00:16:01.049 "is_configured": true, 00:16:01.049 "data_offset": 0, 00:16:01.049 "data_size": 65536 00:16:01.049 }, 00:16:01.049 { 00:16:01.049 "name": "BaseBdev3", 00:16:01.049 "uuid": "323a06e0-06c5-5c3d-8f91-b6d0db43fd1c", 00:16:01.049 "is_configured": true, 00:16:01.049 "data_offset": 0, 00:16:01.049 "data_size": 65536 00:16:01.049 }, 00:16:01.049 { 00:16:01.049 "name": "BaseBdev4", 00:16:01.049 "uuid": "4719765d-928f-5a00-9ccc-50527846f5ac", 00:16:01.049 "is_configured": true, 00:16:01.049 "data_offset": 0, 00:16:01.049 "data_size": 65536 00:16:01.049 } 00:16:01.049 ] 00:16:01.049 }' 00:16:01.049 21:43:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.049 21:43:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:01.307 144.50 IOPS, 433.50 MiB/s [2024-12-10T21:43:02.090Z] 21:43:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:01.307 21:43:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.307 21:43:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:01.307 21:43:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:01.307 21:43:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.307 21:43:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:16:01.307 21:43:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.307 21:43:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.307 21:43:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:01.565 21:43:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.565 21:43:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.565 "name": "raid_bdev1", 00:16:01.565 "uuid": "9b0e495d-316d-4739-9a89-455fc4b33cc0", 00:16:01.565 "strip_size_kb": 0, 00:16:01.565 "state": "online", 00:16:01.565 "raid_level": "raid1", 00:16:01.565 "superblock": false, 00:16:01.565 "num_base_bdevs": 4, 00:16:01.565 "num_base_bdevs_discovered": 3, 00:16:01.565 "num_base_bdevs_operational": 3, 00:16:01.565 "base_bdevs_list": [ 00:16:01.565 { 00:16:01.565 "name": null, 00:16:01.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.565 "is_configured": false, 00:16:01.565 "data_offset": 0, 00:16:01.565 "data_size": 65536 00:16:01.565 }, 00:16:01.565 { 00:16:01.565 "name": "BaseBdev2", 00:16:01.565 "uuid": "2b007f0a-6e03-51b7-9de2-e295935c8552", 00:16:01.565 "is_configured": true, 00:16:01.565 "data_offset": 0, 00:16:01.565 "data_size": 65536 00:16:01.565 }, 00:16:01.565 { 00:16:01.565 "name": "BaseBdev3", 00:16:01.565 "uuid": "323a06e0-06c5-5c3d-8f91-b6d0db43fd1c", 00:16:01.565 "is_configured": true, 00:16:01.565 "data_offset": 0, 00:16:01.565 "data_size": 65536 00:16:01.565 }, 00:16:01.565 { 00:16:01.565 "name": "BaseBdev4", 00:16:01.565 "uuid": "4719765d-928f-5a00-9ccc-50527846f5ac", 00:16:01.565 "is_configured": true, 00:16:01.565 "data_offset": 0, 00:16:01.565 "data_size": 65536 00:16:01.565 } 00:16:01.565 ] 00:16:01.565 }' 00:16:01.565 21:43:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.565 21:43:02 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:01.565 21:43:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.565 21:43:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:01.565 21:43:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:01.565 21:43:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.565 21:43:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:01.565 [2024-12-10 21:43:02.202704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:01.565 21:43:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.565 21:43:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:01.565 [2024-12-10 21:43:02.284879] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:01.565 [2024-12-10 21:43:02.287168] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:01.825 [2024-12-10 21:43:02.411415] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:01.825 [2024-12-10 21:43:02.412919] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:02.084 [2024-12-10 21:43:02.647472] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:02.084 [2024-12-10 21:43:02.647977] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:02.343 148.67 IOPS, 446.00 MiB/s [2024-12-10T21:43:03.126Z] [2024-12-10 21:43:02.999758] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:02.602 [2024-12-10 21:43:03.249405] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:02.602 [2024-12-10 21:43:03.250333] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:02.602 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:02.602 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.602 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:02.602 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:02.602 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.602 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.602 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.602 21:43:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.602 21:43:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:02.602 21:43:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.602 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.602 "name": "raid_bdev1", 00:16:02.602 "uuid": "9b0e495d-316d-4739-9a89-455fc4b33cc0", 00:16:02.602 "strip_size_kb": 0, 00:16:02.602 "state": "online", 00:16:02.602 "raid_level": "raid1", 00:16:02.602 "superblock": false, 00:16:02.602 "num_base_bdevs": 4, 00:16:02.602 "num_base_bdevs_discovered": 4, 00:16:02.602 "num_base_bdevs_operational": 4, 00:16:02.602 "process": { 
00:16:02.602 "type": "rebuild", 00:16:02.602 "target": "spare", 00:16:02.602 "progress": { 00:16:02.602 "blocks": 10240, 00:16:02.602 "percent": 15 00:16:02.602 } 00:16:02.602 }, 00:16:02.602 "base_bdevs_list": [ 00:16:02.602 { 00:16:02.602 "name": "spare", 00:16:02.602 "uuid": "fbdfe519-2b29-5ce1-a46a-1f4cbeb1640a", 00:16:02.602 "is_configured": true, 00:16:02.602 "data_offset": 0, 00:16:02.602 "data_size": 65536 00:16:02.602 }, 00:16:02.602 { 00:16:02.602 "name": "BaseBdev2", 00:16:02.602 "uuid": "2b007f0a-6e03-51b7-9de2-e295935c8552", 00:16:02.602 "is_configured": true, 00:16:02.602 "data_offset": 0, 00:16:02.602 "data_size": 65536 00:16:02.602 }, 00:16:02.602 { 00:16:02.602 "name": "BaseBdev3", 00:16:02.602 "uuid": "323a06e0-06c5-5c3d-8f91-b6d0db43fd1c", 00:16:02.602 "is_configured": true, 00:16:02.602 "data_offset": 0, 00:16:02.602 "data_size": 65536 00:16:02.602 }, 00:16:02.602 { 00:16:02.602 "name": "BaseBdev4", 00:16:02.602 "uuid": "4719765d-928f-5a00-9ccc-50527846f5ac", 00:16:02.602 "is_configured": true, 00:16:02.602 "data_offset": 0, 00:16:02.602 "data_size": 65536 00:16:02.602 } 00:16:02.602 ] 00:16:02.602 }' 00:16:02.602 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.602 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:02.602 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.870 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:02.870 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:02.870 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:02.870 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:02.870 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 
-- # '[' 4 -gt 2 ']' 00:16:02.871 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:02.871 21:43:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.871 21:43:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:02.871 [2024-12-10 21:43:03.411692] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:02.871 [2024-12-10 21:43:03.585802] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:16:02.871 [2024-12-10 21:43:03.585948] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:16:02.871 21:43:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.871 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:02.871 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:02.871 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:02.871 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.871 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:02.871 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:02.871 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.871 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.871 21:43:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.871 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.871 21:43:03 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:02.871 21:43:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.146 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.146 "name": "raid_bdev1", 00:16:03.146 "uuid": "9b0e495d-316d-4739-9a89-455fc4b33cc0", 00:16:03.146 "strip_size_kb": 0, 00:16:03.146 "state": "online", 00:16:03.146 "raid_level": "raid1", 00:16:03.146 "superblock": false, 00:16:03.146 "num_base_bdevs": 4, 00:16:03.146 "num_base_bdevs_discovered": 3, 00:16:03.146 "num_base_bdevs_operational": 3, 00:16:03.146 "process": { 00:16:03.147 "type": "rebuild", 00:16:03.147 "target": "spare", 00:16:03.147 "progress": { 00:16:03.147 "blocks": 12288, 00:16:03.147 "percent": 18 00:16:03.147 } 00:16:03.147 }, 00:16:03.147 "base_bdevs_list": [ 00:16:03.147 { 00:16:03.147 "name": "spare", 00:16:03.147 "uuid": "fbdfe519-2b29-5ce1-a46a-1f4cbeb1640a", 00:16:03.147 "is_configured": true, 00:16:03.147 "data_offset": 0, 00:16:03.147 "data_size": 65536 00:16:03.147 }, 00:16:03.147 { 00:16:03.147 "name": null, 00:16:03.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.147 "is_configured": false, 00:16:03.147 "data_offset": 0, 00:16:03.147 "data_size": 65536 00:16:03.147 }, 00:16:03.147 { 00:16:03.147 "name": "BaseBdev3", 00:16:03.147 "uuid": "323a06e0-06c5-5c3d-8f91-b6d0db43fd1c", 00:16:03.147 "is_configured": true, 00:16:03.147 "data_offset": 0, 00:16:03.147 "data_size": 65536 00:16:03.147 }, 00:16:03.147 { 00:16:03.147 "name": "BaseBdev4", 00:16:03.147 "uuid": "4719765d-928f-5a00-9ccc-50527846f5ac", 00:16:03.147 "is_configured": true, 00:16:03.147 "data_offset": 0, 00:16:03.147 "data_size": 65536 00:16:03.147 } 00:16:03.147 ] 00:16:03.147 }' 00:16:03.147 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.147 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:16:03.147 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.147 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:03.147 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=495 00:16:03.147 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:03.147 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:03.147 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.147 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:03.147 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:03.147 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.147 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.147 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.147 21:43:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.147 21:43:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:03.147 21:43:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.147 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.147 "name": "raid_bdev1", 00:16:03.147 "uuid": "9b0e495d-316d-4739-9a89-455fc4b33cc0", 00:16:03.147 "strip_size_kb": 0, 00:16:03.147 "state": "online", 00:16:03.147 "raid_level": "raid1", 00:16:03.147 "superblock": false, 00:16:03.147 "num_base_bdevs": 4, 00:16:03.147 "num_base_bdevs_discovered": 3, 00:16:03.147 
"num_base_bdevs_operational": 3, 00:16:03.147 "process": { 00:16:03.147 "type": "rebuild", 00:16:03.147 "target": "spare", 00:16:03.147 "progress": { 00:16:03.147 "blocks": 14336, 00:16:03.147 "percent": 21 00:16:03.147 } 00:16:03.147 }, 00:16:03.147 "base_bdevs_list": [ 00:16:03.147 { 00:16:03.147 "name": "spare", 00:16:03.147 "uuid": "fbdfe519-2b29-5ce1-a46a-1f4cbeb1640a", 00:16:03.147 "is_configured": true, 00:16:03.147 "data_offset": 0, 00:16:03.147 "data_size": 65536 00:16:03.147 }, 00:16:03.147 { 00:16:03.147 "name": null, 00:16:03.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.147 "is_configured": false, 00:16:03.147 "data_offset": 0, 00:16:03.147 "data_size": 65536 00:16:03.147 }, 00:16:03.147 { 00:16:03.147 "name": "BaseBdev3", 00:16:03.147 "uuid": "323a06e0-06c5-5c3d-8f91-b6d0db43fd1c", 00:16:03.147 "is_configured": true, 00:16:03.147 "data_offset": 0, 00:16:03.147 "data_size": 65536 00:16:03.147 }, 00:16:03.147 { 00:16:03.147 "name": "BaseBdev4", 00:16:03.147 "uuid": "4719765d-928f-5a00-9ccc-50527846f5ac", 00:16:03.147 "is_configured": true, 00:16:03.147 "data_offset": 0, 00:16:03.147 "data_size": 65536 00:16:03.147 } 00:16:03.147 ] 00:16:03.147 }' 00:16:03.147 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.147 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:03.147 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.147 [2024-12-10 21:43:03.846450] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:03.147 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:03.147 21:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:03.404 130.25 IOPS, 390.75 MiB/s [2024-12-10T21:43:04.187Z] [2024-12-10 21:43:04.081466] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:16:03.662 [2024-12-10 21:43:04.198378] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:16:03.919 [2024-12-10 21:43:04.519851] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:16:03.919 [2024-12-10 21:43:04.621600] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:16:03.919 [2024-12-10 21:43:04.622065] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:16:04.178 114.00 IOPS, 342.00 MiB/s [2024-12-10T21:43:04.961Z] 21:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:04.178 21:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.178 21:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.178 21:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.178 21:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.178 21:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.178 21:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.178 21:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.178 21:43:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.178 21:43:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:04.178 21:43:04 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.178 21:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.178 "name": "raid_bdev1", 00:16:04.178 "uuid": "9b0e495d-316d-4739-9a89-455fc4b33cc0", 00:16:04.178 "strip_size_kb": 0, 00:16:04.178 "state": "online", 00:16:04.178 "raid_level": "raid1", 00:16:04.178 "superblock": false, 00:16:04.178 "num_base_bdevs": 4, 00:16:04.178 "num_base_bdevs_discovered": 3, 00:16:04.178 "num_base_bdevs_operational": 3, 00:16:04.178 "process": { 00:16:04.178 "type": "rebuild", 00:16:04.178 "target": "spare", 00:16:04.178 "progress": { 00:16:04.178 "blocks": 32768, 00:16:04.179 "percent": 50 00:16:04.179 } 00:16:04.179 }, 00:16:04.179 "base_bdevs_list": [ 00:16:04.179 { 00:16:04.179 "name": "spare", 00:16:04.179 "uuid": "fbdfe519-2b29-5ce1-a46a-1f4cbeb1640a", 00:16:04.179 "is_configured": true, 00:16:04.179 "data_offset": 0, 00:16:04.179 "data_size": 65536 00:16:04.179 }, 00:16:04.179 { 00:16:04.179 "name": null, 00:16:04.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.179 "is_configured": false, 00:16:04.179 "data_offset": 0, 00:16:04.179 "data_size": 65536 00:16:04.179 }, 00:16:04.179 { 00:16:04.179 "name": "BaseBdev3", 00:16:04.179 "uuid": "323a06e0-06c5-5c3d-8f91-b6d0db43fd1c", 00:16:04.179 "is_configured": true, 00:16:04.179 "data_offset": 0, 00:16:04.179 "data_size": 65536 00:16:04.179 }, 00:16:04.179 { 00:16:04.179 "name": "BaseBdev4", 00:16:04.179 "uuid": "4719765d-928f-5a00-9ccc-50527846f5ac", 00:16:04.179 "is_configured": true, 00:16:04.179 "data_offset": 0, 00:16:04.179 "data_size": 65536 00:16:04.179 } 00:16:04.179 ] 00:16:04.179 }' 00:16:04.179 21:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.439 21:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:04.439 21:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:16:04.439 21:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:04.439 21:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:05.006 [2024-12-10 21:43:05.579950] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:16:05.006 [2024-12-10 21:43:05.580550] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:16:05.265 [2024-12-10 21:43:05.798255] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:16:05.265 101.00 IOPS, 303.00 MiB/s [2024-12-10T21:43:06.048Z] 21:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:05.265 21:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:05.265 21:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.265 21:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:05.265 21:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:05.265 21:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.524 21:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.524 21:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.524 21:43:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.524 21:43:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:05.524 21:43:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:05.524 21:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.524 "name": "raid_bdev1", 00:16:05.524 "uuid": "9b0e495d-316d-4739-9a89-455fc4b33cc0", 00:16:05.524 "strip_size_kb": 0, 00:16:05.524 "state": "online", 00:16:05.524 "raid_level": "raid1", 00:16:05.524 "superblock": false, 00:16:05.524 "num_base_bdevs": 4, 00:16:05.524 "num_base_bdevs_discovered": 3, 00:16:05.524 "num_base_bdevs_operational": 3, 00:16:05.524 "process": { 00:16:05.524 "type": "rebuild", 00:16:05.524 "target": "spare", 00:16:05.524 "progress": { 00:16:05.524 "blocks": 51200, 00:16:05.524 "percent": 78 00:16:05.524 } 00:16:05.524 }, 00:16:05.524 "base_bdevs_list": [ 00:16:05.524 { 00:16:05.524 "name": "spare", 00:16:05.524 "uuid": "fbdfe519-2b29-5ce1-a46a-1f4cbeb1640a", 00:16:05.524 "is_configured": true, 00:16:05.524 "data_offset": 0, 00:16:05.524 "data_size": 65536 00:16:05.524 }, 00:16:05.524 { 00:16:05.524 "name": null, 00:16:05.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.524 "is_configured": false, 00:16:05.524 "data_offset": 0, 00:16:05.524 "data_size": 65536 00:16:05.524 }, 00:16:05.524 { 00:16:05.524 "name": "BaseBdev3", 00:16:05.524 "uuid": "323a06e0-06c5-5c3d-8f91-b6d0db43fd1c", 00:16:05.524 "is_configured": true, 00:16:05.524 "data_offset": 0, 00:16:05.524 "data_size": 65536 00:16:05.524 }, 00:16:05.524 { 00:16:05.524 "name": "BaseBdev4", 00:16:05.524 "uuid": "4719765d-928f-5a00-9ccc-50527846f5ac", 00:16:05.524 "is_configured": true, 00:16:05.524 "data_offset": 0, 00:16:05.524 "data_size": 65536 00:16:05.524 } 00:16:05.524 ] 00:16:05.524 }' 00:16:05.524 21:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.524 21:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:05.524 21:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.524 21:43:06 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:05.524 21:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:06.092 [2024-12-10 21:43:06.805300] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:06.352 92.71 IOPS, 278.14 MiB/s [2024-12-10T21:43:07.135Z] [2024-12-10 21:43:06.911095] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:06.352 [2024-12-10 21:43:06.915492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.611 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:06.611 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:06.611 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.611 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:06.611 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:06.611 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.611 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.611 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.611 21:43:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.611 21:43:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:06.611 21:43:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.611 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.611 "name": "raid_bdev1", 00:16:06.611 "uuid": 
"9b0e495d-316d-4739-9a89-455fc4b33cc0", 00:16:06.611 "strip_size_kb": 0, 00:16:06.611 "state": "online", 00:16:06.611 "raid_level": "raid1", 00:16:06.611 "superblock": false, 00:16:06.611 "num_base_bdevs": 4, 00:16:06.611 "num_base_bdevs_discovered": 3, 00:16:06.611 "num_base_bdevs_operational": 3, 00:16:06.611 "base_bdevs_list": [ 00:16:06.611 { 00:16:06.611 "name": "spare", 00:16:06.611 "uuid": "fbdfe519-2b29-5ce1-a46a-1f4cbeb1640a", 00:16:06.611 "is_configured": true, 00:16:06.611 "data_offset": 0, 00:16:06.611 "data_size": 65536 00:16:06.611 }, 00:16:06.611 { 00:16:06.611 "name": null, 00:16:06.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.611 "is_configured": false, 00:16:06.611 "data_offset": 0, 00:16:06.611 "data_size": 65536 00:16:06.611 }, 00:16:06.611 { 00:16:06.611 "name": "BaseBdev3", 00:16:06.611 "uuid": "323a06e0-06c5-5c3d-8f91-b6d0db43fd1c", 00:16:06.611 "is_configured": true, 00:16:06.611 "data_offset": 0, 00:16:06.611 "data_size": 65536 00:16:06.611 }, 00:16:06.611 { 00:16:06.611 "name": "BaseBdev4", 00:16:06.611 "uuid": "4719765d-928f-5a00-9ccc-50527846f5ac", 00:16:06.611 "is_configured": true, 00:16:06.611 "data_offset": 0, 00:16:06.611 "data_size": 65536 00:16:06.611 } 00:16:06.611 ] 00:16:06.611 }' 00:16:06.611 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.611 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:06.611 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.611 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:06.611 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:16:06.611 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:06.611 21:43:07 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.611 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:06.611 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:06.611 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.611 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.611 21:43:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.611 21:43:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:06.611 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.611 21:43:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.611 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.611 "name": "raid_bdev1", 00:16:06.611 "uuid": "9b0e495d-316d-4739-9a89-455fc4b33cc0", 00:16:06.611 "strip_size_kb": 0, 00:16:06.611 "state": "online", 00:16:06.611 "raid_level": "raid1", 00:16:06.611 "superblock": false, 00:16:06.611 "num_base_bdevs": 4, 00:16:06.611 "num_base_bdevs_discovered": 3, 00:16:06.611 "num_base_bdevs_operational": 3, 00:16:06.611 "base_bdevs_list": [ 00:16:06.611 { 00:16:06.611 "name": "spare", 00:16:06.611 "uuid": "fbdfe519-2b29-5ce1-a46a-1f4cbeb1640a", 00:16:06.611 "is_configured": true, 00:16:06.611 "data_offset": 0, 00:16:06.611 "data_size": 65536 00:16:06.611 }, 00:16:06.611 { 00:16:06.611 "name": null, 00:16:06.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.611 "is_configured": false, 00:16:06.611 "data_offset": 0, 00:16:06.611 "data_size": 65536 00:16:06.611 }, 00:16:06.611 { 00:16:06.611 "name": "BaseBdev3", 00:16:06.611 "uuid": "323a06e0-06c5-5c3d-8f91-b6d0db43fd1c", 00:16:06.611 "is_configured": true, 
00:16:06.611 "data_offset": 0, 00:16:06.611 "data_size": 65536 00:16:06.611 }, 00:16:06.611 { 00:16:06.611 "name": "BaseBdev4", 00:16:06.611 "uuid": "4719765d-928f-5a00-9ccc-50527846f5ac", 00:16:06.611 "is_configured": true, 00:16:06.611 "data_offset": 0, 00:16:06.611 "data_size": 65536 00:16:06.611 } 00:16:06.611 ] 00:16:06.611 }' 00:16:06.611 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.869 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:06.870 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.870 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:06.870 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:06.870 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.870 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.870 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:06.870 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:06.870 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:06.870 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.870 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.870 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.870 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.870 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:06.870 21:43:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.870 21:43:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:06.870 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.870 21:43:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.870 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.870 "name": "raid_bdev1", 00:16:06.870 "uuid": "9b0e495d-316d-4739-9a89-455fc4b33cc0", 00:16:06.870 "strip_size_kb": 0, 00:16:06.870 "state": "online", 00:16:06.870 "raid_level": "raid1", 00:16:06.870 "superblock": false, 00:16:06.870 "num_base_bdevs": 4, 00:16:06.870 "num_base_bdevs_discovered": 3, 00:16:06.870 "num_base_bdevs_operational": 3, 00:16:06.870 "base_bdevs_list": [ 00:16:06.870 { 00:16:06.870 "name": "spare", 00:16:06.870 "uuid": "fbdfe519-2b29-5ce1-a46a-1f4cbeb1640a", 00:16:06.870 "is_configured": true, 00:16:06.870 "data_offset": 0, 00:16:06.870 "data_size": 65536 00:16:06.870 }, 00:16:06.870 { 00:16:06.870 "name": null, 00:16:06.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.870 "is_configured": false, 00:16:06.870 "data_offset": 0, 00:16:06.870 "data_size": 65536 00:16:06.870 }, 00:16:06.870 { 00:16:06.870 "name": "BaseBdev3", 00:16:06.870 "uuid": "323a06e0-06c5-5c3d-8f91-b6d0db43fd1c", 00:16:06.870 "is_configured": true, 00:16:06.870 "data_offset": 0, 00:16:06.870 "data_size": 65536 00:16:06.870 }, 00:16:06.870 { 00:16:06.870 "name": "BaseBdev4", 00:16:06.870 "uuid": "4719765d-928f-5a00-9ccc-50527846f5ac", 00:16:06.870 "is_configured": true, 00:16:06.870 "data_offset": 0, 00:16:06.870 "data_size": 65536 00:16:06.870 } 00:16:06.870 ] 00:16:06.870 }' 00:16:06.870 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.870 21:43:07 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:16:07.386 84.75 IOPS, 254.25 MiB/s [2024-12-10T21:43:08.169Z] 21:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:07.386 21:43:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.386 21:43:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.386 [2024-12-10 21:43:07.939724] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:07.386 [2024-12-10 21:43:07.939759] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:07.386 00:16:07.386 Latency(us) 00:16:07.386 [2024-12-10T21:43:08.169Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:07.386 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:07.386 raid_bdev1 : 8.16 83.69 251.08 0.00 0.00 16931.30 311.22 119052.30 00:16:07.386 [2024-12-10T21:43:08.169Z] =================================================================================================================== 00:16:07.386 [2024-12-10T21:43:08.169Z] Total : 83.69 251.08 0.00 0.00 16931.30 311.22 119052.30 00:16:07.386 { 00:16:07.386 "results": [ 00:16:07.386 { 00:16:07.386 "job": "raid_bdev1", 00:16:07.386 "core_mask": "0x1", 00:16:07.386 "workload": "randrw", 00:16:07.386 "percentage": 50, 00:16:07.386 "status": "finished", 00:16:07.386 "queue_depth": 2, 00:16:07.386 "io_size": 3145728, 00:16:07.386 "runtime": 8.160813, 00:16:07.386 "iops": 83.69264189732077, 00:16:07.386 "mibps": 251.07792569196232, 00:16:07.386 "io_failed": 0, 00:16:07.386 "io_timeout": 0, 00:16:07.386 "avg_latency_us": 16931.300885510238, 00:16:07.386 "min_latency_us": 311.22445414847164, 00:16:07.386 "max_latency_us": 119052.29694323144 00:16:07.386 } 00:16:07.386 ], 00:16:07.386 "core_count": 1 00:16:07.386 } 00:16:07.386 [2024-12-10 21:43:08.032504] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:07.386 [2024-12-10 21:43:08.032597] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.386 [2024-12-10 21:43:08.032696] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:07.386 [2024-12-10 21:43:08.032712] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:07.386 21:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.386 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.386 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:07.386 21:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.386 21:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.386 21:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.386 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:07.386 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:07.386 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:07.386 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:07.386 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:07.386 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:07.386 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:07.386 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:07.386 21:43:08 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:07.386 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:07.386 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:07.386 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:07.386 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:07.644 /dev/nbd0 00:16:07.644 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:07.644 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:07.644 21:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:07.644 21:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:16:07.644 21:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:07.645 21:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:07.645 21:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:07.645 21:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:16:07.645 21:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:07.645 21:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:07.645 21:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:07.645 1+0 records in 00:16:07.645 1+0 records out 00:16:07.645 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185083 s, 22.1 MB/s 00:16:07.645 21:43:08 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:07.645 21:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:16:07.645 21:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:07.645 21:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:07.645 21:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:16:07.645 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:07.645 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:07.645 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:07.645 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:16:07.645 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:16:07.645 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:07.645 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:16:07.645 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:16:07.645 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:07.645 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:16:07.645 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:07.645 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:07.645 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:07.645 21:43:08 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@12 -- # local i 00:16:07.645 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:07.645 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:07.645 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:16:07.903 /dev/nbd1 00:16:07.903 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:07.903 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:07.903 21:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:07.903 21:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:16:07.903 21:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:07.903 21:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:07.903 21:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:07.903 21:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:16:07.903 21:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:07.903 21:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:07.903 21:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:07.903 1+0 records in 00:16:07.903 1+0 records out 00:16:07.903 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000558497 s, 7.3 MB/s 00:16:07.903 21:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:07.903 21:43:08 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@890 -- # size=4096 00:16:07.903 21:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:07.903 21:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:07.903 21:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:16:07.903 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:07.903 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:07.903 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:08.160 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:08.160 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:08.160 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:08.160 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:08.160 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:08.160 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:08.160 21:43:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:08.418 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:08.418 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:08.418 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:08.418 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:08.418 21:43:09 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:08.418 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:08.418 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:08.418 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:08.418 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:08.418 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:16:08.418 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:16:08.418 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:08.418 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:16:08.418 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:08.418 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:08.418 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:08.418 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:08.418 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:08.418 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:08.418 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:16:08.676 /dev/nbd1 00:16:08.676 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:08.676 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:08.676 21:43:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 
-- # local nbd_name=nbd1 00:16:08.676 21:43:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:16:08.676 21:43:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:08.676 21:43:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:08.676 21:43:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:08.676 21:43:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:16:08.676 21:43:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:08.676 21:43:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:08.676 21:43:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:08.676 1+0 records in 00:16:08.676 1+0 records out 00:16:08.676 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000515646 s, 7.9 MB/s 00:16:08.676 21:43:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:08.676 21:43:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:16:08.676 21:43:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:08.676 21:43:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:08.676 21:43:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:16:08.676 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:08.676 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:08.676 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:08.677 
21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:08.677 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:08.677 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:08.677 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:08.677 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:08.677 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:08.677 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:08.935 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:08.936 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:08.936 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:08.936 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:08.936 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:08.936 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:08.936 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:08.936 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:08.936 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:08.936 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:08.936 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:08.936 21:43:09 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:08.936 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:08.936 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:08.936 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:09.194 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:09.194 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:09.194 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:09.194 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:09.194 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:09.194 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:09.194 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:09.194 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:09.194 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:09.194 21:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78919 00:16:09.194 21:43:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78919 ']' 00:16:09.194 21:43:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78919 00:16:09.194 21:43:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:16:09.194 21:43:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:09.194 21:43:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78919 
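The `waitfornbd_exit` trace above (the `(( i = 1 ))` / `(( i <= 20 ))` loop that polls `grep -q -w nbd0 /proc/partitions` until `break`) is an instance of a generic poll-with-retries pattern. A minimal sketch of that pattern, with a hypothetical helper name and the predicate passed as a command (a sketch, not the actual nbd_common.sh implementation):

```shell
#!/usr/bin/env bash
# Poll a predicate command up to 20 times with a short pause, mirroring the
# (( i <= 20 )) retry loop seen in nbd_common.sh traces above.
# The helper name wait_for_condition is illustrative.
wait_for_condition() {
    local i
    for ((i = 1; i <= 20; i++)); do
        "$@" && return 0   # predicate succeeded, stop polling
        sleep 0.1          # brief pause before the next attempt
    done
    return 1               # predicate never succeeded within 20 tries
}
```

With this helper, the check traced in the log would read `wait_for_condition grep -q -w nbd1 /proc/partitions`.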
00:16:09.194 21:43:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:09.194 21:43:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:09.194 21:43:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78919' 00:16:09.194 killing process with pid 78919 00:16:09.194 21:43:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78919 00:16:09.194 Received shutdown signal, test time was about 10.060153 seconds 00:16:09.194 00:16:09.194 Latency(us) 00:16:09.194 [2024-12-10T21:43:09.977Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:09.194 [2024-12-10T21:43:09.977Z] =================================================================================================================== 00:16:09.194 [2024-12-10T21:43:09.977Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:09.194 [2024-12-10 21:43:09.903290] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:09.194 21:43:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 78919 00:16:09.760 [2024-12-10 21:43:10.361316] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:11.134 00:16:11.134 real 0m13.650s 00:16:11.134 user 0m17.202s 00:16:11.134 sys 0m1.821s 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.134 ************************************ 00:16:11.134 END TEST raid_rebuild_test_io 00:16:11.134 ************************************ 00:16:11.134 21:43:11 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:16:11.134 21:43:11 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:11.134 21:43:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:11.134 21:43:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:11.134 ************************************ 00:16:11.134 START TEST raid_rebuild_test_sb_io 00:16:11.134 ************************************ 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:11.134 21:43:11 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79334 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L 
bdev_raid 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79334 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79334 ']' 00:16:11.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:11.134 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.134 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:11.134 Zero copy mechanism will not be used. 00:16:11.134 [2024-12-10 21:43:11.776106] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
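The `waitforlisten 79334` call above blocks until the bdevperf app is up and listening on `/var/tmp/spdk.sock` (hence the "Waiting for process to start up and listen on UNIX domain socket" message). A hedged, reduced sketch of that idea, polling for the socket path to appear; the real autotest_common.sh helper also checks the pid and issues RPC probes, which this simplified version omits:

```shell
#!/usr/bin/env bash
# Simplified waitforlisten: poll until the RPC server's UNIX-domain socket
# path exists. The retry bound of 100 mirrors `local max_retries=100` in the
# trace; the function name and everything else is an illustrative reduction.
wait_for_rpc_socket() {
    local rpc_addr=${1:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        [[ -e $rpc_addr ]] && return 0   # socket path appeared, server is up
        sleep 0.1
    done
    return 1                             # server never came up in time
}
```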
00:16:11.134 [2024-12-10 21:43:11.776215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79334 ] 00:16:11.398 [2024-12-10 21:43:11.950612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.398 [2024-12-10 21:43:12.069086] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.666 [2024-12-10 21:43:12.279478] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:11.666 [2024-12-10 21:43:12.279544] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:11.934 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:11.935 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:16:11.935 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:11.935 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:11.935 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.935 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.935 BaseBdev1_malloc 00:16:11.935 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.935 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:11.935 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.935 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.935 [2024-12-10 21:43:12.699485] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:11.935 [2024-12-10 21:43:12.699612] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.935 [2024-12-10 21:43:12.699659] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:11.935 [2024-12-10 21:43:12.699698] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.935 [2024-12-10 21:43:12.702207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.935 [2024-12-10 21:43:12.702304] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:11.935 BaseBdev1 00:16:11.935 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.935 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:11.935 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:11.935 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.935 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.194 BaseBdev2_malloc 00:16:12.194 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.194 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:12.194 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.194 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.194 [2024-12-10 21:43:12.754527] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:12.194 [2024-12-10 21:43:12.754682] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:16:12.194 [2024-12-10 21:43:12.754711] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:12.194 [2024-12-10 21:43:12.754725] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:12.194 [2024-12-10 21:43:12.757254] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:12.194 BaseBdev2 00:16:12.194 [2024-12-10 21:43:12.757346] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:12.194 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.194 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:12.194 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:12.194 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.194 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.194 BaseBdev3_malloc 00:16:12.194 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.194 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:12.194 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.194 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.194 [2024-12-10 21:43:12.823731] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:12.194 [2024-12-10 21:43:12.823799] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:12.194 [2024-12-10 21:43:12.823823] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:12.194 
[2024-12-10 21:43:12.823833] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:12.194 [2024-12-10 21:43:12.826343] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:12.194 [2024-12-10 21:43:12.826387] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:12.194 BaseBdev3 00:16:12.194 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.194 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:12.194 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:12.194 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.194 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.194 BaseBdev4_malloc 00:16:12.194 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.194 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:12.194 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.194 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.194 [2024-12-10 21:43:12.879801] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:12.194 [2024-12-10 21:43:12.879990] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:12.194 [2024-12-10 21:43:12.880047] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:12.195 [2024-12-10 21:43:12.880090] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:12.195 [2024-12-10 21:43:12.882511] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:12.195 [2024-12-10 21:43:12.882588] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:12.195 BaseBdev4 00:16:12.195 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.195 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:12.195 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.195 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.195 spare_malloc 00:16:12.195 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.195 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:12.195 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.195 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.195 spare_delay 00:16:12.195 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.195 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:12.195 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.195 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.195 [2024-12-10 21:43:12.946781] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:12.195 [2024-12-10 21:43:12.946853] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:12.195 [2024-12-10 21:43:12.946876] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:16:12.195 [2024-12-10 21:43:12.946889] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:12.195 [2024-12-10 21:43:12.949290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:12.195 [2024-12-10 21:43:12.949336] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:12.195 spare 00:16:12.195 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.195 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:12.195 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.195 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.195 [2024-12-10 21:43:12.958801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:12.195 [2024-12-10 21:43:12.961069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:12.195 [2024-12-10 21:43:12.961196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:12.195 [2024-12-10 21:43:12.961296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:12.195 [2024-12-10 21:43:12.961571] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:12.195 [2024-12-10 21:43:12.961627] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:12.195 [2024-12-10 21:43:12.961940] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:12.195 [2024-12-10 21:43:12.962193] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:12.195 [2024-12-10 21:43:12.962242] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:12.195 [2024-12-10 21:43:12.962487] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:12.195 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.195 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:12.195 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.195 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.195 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:12.195 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:12.195 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:12.195 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.195 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.195 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.195 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.195 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.195 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.195 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.195 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.455 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.455 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.455 "name": "raid_bdev1", 00:16:12.455 "uuid": "65f65ed6-f4d7-4931-86be-7397420671c1", 00:16:12.455 "strip_size_kb": 0, 00:16:12.455 "state": "online", 00:16:12.455 "raid_level": "raid1", 00:16:12.455 "superblock": true, 00:16:12.455 "num_base_bdevs": 4, 00:16:12.455 "num_base_bdevs_discovered": 4, 00:16:12.455 "num_base_bdevs_operational": 4, 00:16:12.455 "base_bdevs_list": [ 00:16:12.455 { 00:16:12.455 "name": "BaseBdev1", 00:16:12.455 "uuid": "e390e292-3e58-578a-81ea-5d45cb13a99f", 00:16:12.455 "is_configured": true, 00:16:12.455 "data_offset": 2048, 00:16:12.455 "data_size": 63488 00:16:12.455 }, 00:16:12.455 { 00:16:12.455 "name": "BaseBdev2", 00:16:12.455 "uuid": "0dc0939e-bd6b-5351-877f-92bebc20e930", 00:16:12.455 "is_configured": true, 00:16:12.455 "data_offset": 2048, 00:16:12.455 "data_size": 63488 00:16:12.455 }, 00:16:12.455 { 00:16:12.455 "name": "BaseBdev3", 00:16:12.455 "uuid": "0026972d-7c27-57da-862e-988f53cbf4b7", 00:16:12.455 "is_configured": true, 00:16:12.455 "data_offset": 2048, 00:16:12.455 "data_size": 63488 00:16:12.455 }, 00:16:12.455 { 00:16:12.455 "name": "BaseBdev4", 00:16:12.455 "uuid": "e5aac795-c0af-51d6-b5a1-c966f63fd0e5", 00:16:12.455 "is_configured": true, 00:16:12.455 "data_offset": 2048, 00:16:12.455 "data_size": 63488 00:16:12.455 } 00:16:12.455 ] 00:16:12.455 }' 00:16:12.455 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.455 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.714 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:12.714 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:12.714 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.714 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.714 [2024-12-10 21:43:13.454486] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:12.714 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.973 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:16:12.973 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.973 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.973 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.973 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:12.973 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.973 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:12.973 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:16:12.973 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:12.973 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:12.973 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.973 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.973 [2024-12-10 21:43:13.557786] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:12.973 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.973 21:43:13 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:12.973 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.973 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.973 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:12.973 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:12.973 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:12.973 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.973 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.973 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.973 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.973 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.973 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.973 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.973 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.973 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.973 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.973 "name": "raid_bdev1", 00:16:12.973 "uuid": "65f65ed6-f4d7-4931-86be-7397420671c1", 00:16:12.973 "strip_size_kb": 0, 00:16:12.973 "state": "online", 00:16:12.973 "raid_level": "raid1", 00:16:12.973 
"superblock": true, 00:16:12.973 "num_base_bdevs": 4, 00:16:12.973 "num_base_bdevs_discovered": 3, 00:16:12.973 "num_base_bdevs_operational": 3, 00:16:12.973 "base_bdevs_list": [ 00:16:12.973 { 00:16:12.973 "name": null, 00:16:12.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.973 "is_configured": false, 00:16:12.974 "data_offset": 0, 00:16:12.974 "data_size": 63488 00:16:12.974 }, 00:16:12.974 { 00:16:12.974 "name": "BaseBdev2", 00:16:12.974 "uuid": "0dc0939e-bd6b-5351-877f-92bebc20e930", 00:16:12.974 "is_configured": true, 00:16:12.974 "data_offset": 2048, 00:16:12.974 "data_size": 63488 00:16:12.974 }, 00:16:12.974 { 00:16:12.974 "name": "BaseBdev3", 00:16:12.974 "uuid": "0026972d-7c27-57da-862e-988f53cbf4b7", 00:16:12.974 "is_configured": true, 00:16:12.974 "data_offset": 2048, 00:16:12.974 "data_size": 63488 00:16:12.974 }, 00:16:12.974 { 00:16:12.974 "name": "BaseBdev4", 00:16:12.974 "uuid": "e5aac795-c0af-51d6-b5a1-c966f63fd0e5", 00:16:12.974 "is_configured": true, 00:16:12.974 "data_offset": 2048, 00:16:12.974 "data_size": 63488 00:16:12.974 } 00:16:12.974 ] 00:16:12.974 }' 00:16:12.974 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.974 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.974 [2024-12-10 21:43:13.665907] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:12.974 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:12.974 Zero copy mechanism will not be used. 00:16:12.974 Running I/O for 60 seconds... 
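Earlier in the log, `killprocess 78919` tears down the bdevperf process: it rejects an empty pid, probes liveness with `kill -0`, inspects the process name via `ps --no-headers -o comm=`, then kills and reaps. A reduced sketch assuming those same steps; the `ps` name check and the sudo-handling branch from the real common/autotest_common.sh helper are omitted here:

```shell
#!/usr/bin/env bash
# Reduced killprocess, modeled on the common/autotest_common.sh traces above:
# refuse an empty pid, verify the target is alive, then SIGTERM it and reap.
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1                # the '[' -z ... ']' guard
    kill -0 "$pid" 2>/dev/null || return 1   # kill -0 probes liveness only
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                  # reap so no zombie is left
    return 0
}
```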
00:16:13.232 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:13.232 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.232 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.232 [2024-12-10 21:43:13.991194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:13.490 21:43:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.490 21:43:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:13.490 [2024-12-10 21:43:14.054610] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:16:13.490 [2024-12-10 21:43:14.056864] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:13.490 [2024-12-10 21:43:14.172916] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:13.490 [2024-12-10 21:43:14.174638] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:13.748 [2024-12-10 21:43:14.386308] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:13.748 [2024-12-10 21:43:14.387231] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:14.006 122.00 IOPS, 366.00 MiB/s [2024-12-10T21:43:14.789Z] [2024-12-10 21:43:14.722323] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:14.006 [2024-12-10 21:43:14.728877] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:14.264 [2024-12-10 21:43:14.967935] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:14.521 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.521 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.521 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.521 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.521 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.521 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.521 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.522 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.522 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.522 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.522 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.522 "name": "raid_bdev1", 00:16:14.522 "uuid": "65f65ed6-f4d7-4931-86be-7397420671c1", 00:16:14.522 "strip_size_kb": 0, 00:16:14.522 "state": "online", 00:16:14.522 "raid_level": "raid1", 00:16:14.522 "superblock": true, 00:16:14.522 "num_base_bdevs": 4, 00:16:14.522 "num_base_bdevs_discovered": 4, 00:16:14.522 "num_base_bdevs_operational": 4, 00:16:14.522 "process": { 00:16:14.522 "type": "rebuild", 00:16:14.522 "target": "spare", 00:16:14.522 "progress": { 00:16:14.522 "blocks": 10240, 00:16:14.522 "percent": 16 00:16:14.522 } 00:16:14.522 }, 00:16:14.522 "base_bdevs_list": [ 00:16:14.522 { 00:16:14.522 "name": "spare", 
00:16:14.522 "uuid": "a5b22551-fa8e-548e-8a8c-e1d8a40f24bf", 00:16:14.522 "is_configured": true, 00:16:14.522 "data_offset": 2048, 00:16:14.522 "data_size": 63488 00:16:14.522 }, 00:16:14.522 { 00:16:14.522 "name": "BaseBdev2", 00:16:14.522 "uuid": "0dc0939e-bd6b-5351-877f-92bebc20e930", 00:16:14.522 "is_configured": true, 00:16:14.522 "data_offset": 2048, 00:16:14.522 "data_size": 63488 00:16:14.522 }, 00:16:14.522 { 00:16:14.522 "name": "BaseBdev3", 00:16:14.522 "uuid": "0026972d-7c27-57da-862e-988f53cbf4b7", 00:16:14.522 "is_configured": true, 00:16:14.522 "data_offset": 2048, 00:16:14.522 "data_size": 63488 00:16:14.522 }, 00:16:14.522 { 00:16:14.522 "name": "BaseBdev4", 00:16:14.522 "uuid": "e5aac795-c0af-51d6-b5a1-c966f63fd0e5", 00:16:14.522 "is_configured": true, 00:16:14.522 "data_offset": 2048, 00:16:14.522 "data_size": 63488 00:16:14.522 } 00:16:14.522 ] 00:16:14.522 }' 00:16:14.522 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.522 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:14.522 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.522 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:14.522 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:14.522 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.522 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.522 [2024-12-10 21:43:15.206957] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:14.522 [2024-12-10 21:43:15.222366] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:14.522 [2024-12-10 21:43:15.241150] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:14.522 [2024-12-10 21:43:15.241230] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:14.522 [2024-12-10 21:43:15.241249] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:14.522 [2024-12-10 21:43:15.274275] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:16:14.522 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.522 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:14.522 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.522 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.522 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:14.522 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:14.522 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:14.522 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.522 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.522 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.522 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.522 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.522 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.522 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:16:14.522 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.779 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.779 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.779 "name": "raid_bdev1", 00:16:14.779 "uuid": "65f65ed6-f4d7-4931-86be-7397420671c1", 00:16:14.779 "strip_size_kb": 0, 00:16:14.779 "state": "online", 00:16:14.779 "raid_level": "raid1", 00:16:14.779 "superblock": true, 00:16:14.779 "num_base_bdevs": 4, 00:16:14.779 "num_base_bdevs_discovered": 3, 00:16:14.779 "num_base_bdevs_operational": 3, 00:16:14.779 "base_bdevs_list": [ 00:16:14.779 { 00:16:14.779 "name": null, 00:16:14.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.779 "is_configured": false, 00:16:14.779 "data_offset": 0, 00:16:14.779 "data_size": 63488 00:16:14.779 }, 00:16:14.779 { 00:16:14.779 "name": "BaseBdev2", 00:16:14.779 "uuid": "0dc0939e-bd6b-5351-877f-92bebc20e930", 00:16:14.779 "is_configured": true, 00:16:14.779 "data_offset": 2048, 00:16:14.779 "data_size": 63488 00:16:14.779 }, 00:16:14.779 { 00:16:14.779 "name": "BaseBdev3", 00:16:14.779 "uuid": "0026972d-7c27-57da-862e-988f53cbf4b7", 00:16:14.779 "is_configured": true, 00:16:14.779 "data_offset": 2048, 00:16:14.779 "data_size": 63488 00:16:14.779 }, 00:16:14.779 { 00:16:14.779 "name": "BaseBdev4", 00:16:14.779 "uuid": "e5aac795-c0af-51d6-b5a1-c966f63fd0e5", 00:16:14.779 "is_configured": true, 00:16:14.779 "data_offset": 2048, 00:16:14.779 "data_size": 63488 00:16:14.779 } 00:16:14.779 ] 00:16:14.779 }' 00:16:14.779 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.779 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.037 127.50 IOPS, 382.50 MiB/s [2024-12-10T21:43:15.820Z] 21:43:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:15.037 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.037 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:15.037 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:15.037 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.037 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.037 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.037 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.037 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.037 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.037 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.037 "name": "raid_bdev1", 00:16:15.037 "uuid": "65f65ed6-f4d7-4931-86be-7397420671c1", 00:16:15.037 "strip_size_kb": 0, 00:16:15.037 "state": "online", 00:16:15.037 "raid_level": "raid1", 00:16:15.037 "superblock": true, 00:16:15.037 "num_base_bdevs": 4, 00:16:15.037 "num_base_bdevs_discovered": 3, 00:16:15.037 "num_base_bdevs_operational": 3, 00:16:15.037 "base_bdevs_list": [ 00:16:15.037 { 00:16:15.037 "name": null, 00:16:15.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.037 "is_configured": false, 00:16:15.037 "data_offset": 0, 00:16:15.037 "data_size": 63488 00:16:15.037 }, 00:16:15.037 { 00:16:15.037 "name": "BaseBdev2", 00:16:15.037 "uuid": "0dc0939e-bd6b-5351-877f-92bebc20e930", 00:16:15.037 "is_configured": true, 00:16:15.037 "data_offset": 
2048, 00:16:15.037 "data_size": 63488 00:16:15.037 }, 00:16:15.037 { 00:16:15.037 "name": "BaseBdev3", 00:16:15.037 "uuid": "0026972d-7c27-57da-862e-988f53cbf4b7", 00:16:15.037 "is_configured": true, 00:16:15.037 "data_offset": 2048, 00:16:15.037 "data_size": 63488 00:16:15.037 }, 00:16:15.037 { 00:16:15.037 "name": "BaseBdev4", 00:16:15.037 "uuid": "e5aac795-c0af-51d6-b5a1-c966f63fd0e5", 00:16:15.037 "is_configured": true, 00:16:15.037 "data_offset": 2048, 00:16:15.037 "data_size": 63488 00:16:15.037 } 00:16:15.037 ] 00:16:15.037 }' 00:16:15.037 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.295 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:15.295 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.295 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:15.295 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:15.295 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.295 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.295 [2024-12-10 21:43:15.918667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:15.295 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.295 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:15.295 [2024-12-10 21:43:15.994797] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:15.295 [2024-12-10 21:43:15.996992] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:15.554 [2024-12-10 21:43:16.122579] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:15.554 [2024-12-10 21:43:16.124156] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:15.812 [2024-12-10 21:43:16.350541] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:15.812 [2024-12-10 21:43:16.351471] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:16.070 126.00 IOPS, 378.00 MiB/s [2024-12-10T21:43:16.853Z] [2024-12-10 21:43:16.678485] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:16.070 [2024-12-10 21:43:16.679186] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:16.070 [2024-12-10 21:43:16.798840] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:16.070 [2024-12-10 21:43:16.799292] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:16.328 21:43:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:16.328 21:43:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.328 21:43:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:16.328 21:43:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:16.328 21:43:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.328 21:43:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.328 21:43:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.328 21:43:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.328 21:43:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:16.328 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.328 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.328 "name": "raid_bdev1", 00:16:16.328 "uuid": "65f65ed6-f4d7-4931-86be-7397420671c1", 00:16:16.328 "strip_size_kb": 0, 00:16:16.328 "state": "online", 00:16:16.328 "raid_level": "raid1", 00:16:16.328 "superblock": true, 00:16:16.328 "num_base_bdevs": 4, 00:16:16.328 "num_base_bdevs_discovered": 4, 00:16:16.328 "num_base_bdevs_operational": 4, 00:16:16.328 "process": { 00:16:16.328 "type": "rebuild", 00:16:16.328 "target": "spare", 00:16:16.328 "progress": { 00:16:16.328 "blocks": 12288, 00:16:16.328 "percent": 19 00:16:16.328 } 00:16:16.328 }, 00:16:16.328 "base_bdevs_list": [ 00:16:16.328 { 00:16:16.328 "name": "spare", 00:16:16.328 "uuid": "a5b22551-fa8e-548e-8a8c-e1d8a40f24bf", 00:16:16.328 "is_configured": true, 00:16:16.328 "data_offset": 2048, 00:16:16.328 "data_size": 63488 00:16:16.328 }, 00:16:16.328 { 00:16:16.328 "name": "BaseBdev2", 00:16:16.328 "uuid": "0dc0939e-bd6b-5351-877f-92bebc20e930", 00:16:16.328 "is_configured": true, 00:16:16.328 "data_offset": 2048, 00:16:16.328 "data_size": 63488 00:16:16.328 }, 00:16:16.328 { 00:16:16.328 "name": "BaseBdev3", 00:16:16.328 "uuid": "0026972d-7c27-57da-862e-988f53cbf4b7", 00:16:16.328 "is_configured": true, 00:16:16.328 "data_offset": 2048, 00:16:16.328 "data_size": 63488 00:16:16.328 }, 00:16:16.328 { 00:16:16.328 "name": "BaseBdev4", 00:16:16.328 "uuid": "e5aac795-c0af-51d6-b5a1-c966f63fd0e5", 00:16:16.328 "is_configured": true, 00:16:16.328 "data_offset": 2048, 00:16:16.328 
"data_size": 63488 00:16:16.328 } 00:16:16.328 ] 00:16:16.328 }' 00:16:16.328 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.328 [2024-12-10 21:43:17.056765] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:16.328 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:16.328 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.586 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:16.586 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:16.586 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:16.586 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:16.586 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:16.586 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:16.586 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:16.586 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:16.586 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.586 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:16.586 [2024-12-10 21:43:17.135859] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:16.845 [2024-12-10 21:43:17.459412] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:16:16.845 [2024-12-10 21:43:17.459559] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:16:16.845 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.845 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:16.845 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:16.845 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:16.845 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.845 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:16.845 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:16.845 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.845 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.845 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.845 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.845 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:16.845 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.845 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.845 "name": "raid_bdev1", 00:16:16.845 "uuid": "65f65ed6-f4d7-4931-86be-7397420671c1", 00:16:16.845 "strip_size_kb": 0, 00:16:16.845 "state": "online", 00:16:16.845 "raid_level": "raid1", 00:16:16.845 "superblock": true, 00:16:16.845 "num_base_bdevs": 4, 00:16:16.845 "num_base_bdevs_discovered": 3, 00:16:16.845 
"num_base_bdevs_operational": 3, 00:16:16.845 "process": { 00:16:16.845 "type": "rebuild", 00:16:16.845 "target": "spare", 00:16:16.845 "progress": { 00:16:16.845 "blocks": 16384, 00:16:16.845 "percent": 25 00:16:16.845 } 00:16:16.845 }, 00:16:16.845 "base_bdevs_list": [ 00:16:16.845 { 00:16:16.845 "name": "spare", 00:16:16.845 "uuid": "a5b22551-fa8e-548e-8a8c-e1d8a40f24bf", 00:16:16.845 "is_configured": true, 00:16:16.845 "data_offset": 2048, 00:16:16.845 "data_size": 63488 00:16:16.845 }, 00:16:16.845 { 00:16:16.845 "name": null, 00:16:16.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.845 "is_configured": false, 00:16:16.845 "data_offset": 0, 00:16:16.845 "data_size": 63488 00:16:16.845 }, 00:16:16.845 { 00:16:16.845 "name": "BaseBdev3", 00:16:16.845 "uuid": "0026972d-7c27-57da-862e-988f53cbf4b7", 00:16:16.845 "is_configured": true, 00:16:16.845 "data_offset": 2048, 00:16:16.845 "data_size": 63488 00:16:16.845 }, 00:16:16.845 { 00:16:16.845 "name": "BaseBdev4", 00:16:16.845 "uuid": "e5aac795-c0af-51d6-b5a1-c966f63fd0e5", 00:16:16.845 "is_configured": true, 00:16:16.845 "data_offset": 2048, 00:16:16.845 "data_size": 63488 00:16:16.845 } 00:16:16.845 ] 00:16:16.845 }' 00:16:16.845 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.845 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:16.845 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.104 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:17.104 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=509 00:16:17.104 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:17.104 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:16:17.104 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.104 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:17.104 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:17.104 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.104 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.104 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.104 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.104 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.104 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.104 113.00 IOPS, 339.00 MiB/s [2024-12-10T21:43:17.887Z] 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.104 "name": "raid_bdev1", 00:16:17.104 "uuid": "65f65ed6-f4d7-4931-86be-7397420671c1", 00:16:17.104 "strip_size_kb": 0, 00:16:17.104 "state": "online", 00:16:17.104 "raid_level": "raid1", 00:16:17.104 "superblock": true, 00:16:17.104 "num_base_bdevs": 4, 00:16:17.104 "num_base_bdevs_discovered": 3, 00:16:17.104 "num_base_bdevs_operational": 3, 00:16:17.104 "process": { 00:16:17.104 "type": "rebuild", 00:16:17.104 "target": "spare", 00:16:17.104 "progress": { 00:16:17.104 "blocks": 18432, 00:16:17.104 "percent": 29 00:16:17.104 } 00:16:17.104 }, 00:16:17.104 "base_bdevs_list": [ 00:16:17.104 { 00:16:17.104 "name": "spare", 00:16:17.104 "uuid": "a5b22551-fa8e-548e-8a8c-e1d8a40f24bf", 00:16:17.104 "is_configured": true, 00:16:17.104 "data_offset": 2048, 00:16:17.104 "data_size": 63488 
00:16:17.104 }, 00:16:17.104 { 00:16:17.104 "name": null, 00:16:17.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.104 "is_configured": false, 00:16:17.104 "data_offset": 0, 00:16:17.104 "data_size": 63488 00:16:17.104 }, 00:16:17.104 { 00:16:17.104 "name": "BaseBdev3", 00:16:17.104 "uuid": "0026972d-7c27-57da-862e-988f53cbf4b7", 00:16:17.104 "is_configured": true, 00:16:17.104 "data_offset": 2048, 00:16:17.104 "data_size": 63488 00:16:17.104 }, 00:16:17.104 { 00:16:17.104 "name": "BaseBdev4", 00:16:17.104 "uuid": "e5aac795-c0af-51d6-b5a1-c966f63fd0e5", 00:16:17.104 "is_configured": true, 00:16:17.104 "data_offset": 2048, 00:16:17.104 "data_size": 63488 00:16:17.104 } 00:16:17.104 ] 00:16:17.104 }' 00:16:17.104 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.104 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:17.104 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.104 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:17.104 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:17.104 [2024-12-10 21:43:17.783518] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:16:18.039 [2024-12-10 21:43:18.582825] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:16:18.039 101.40 IOPS, 304.20 MiB/s [2024-12-10T21:43:18.822Z] 21:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:18.039 21:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:18.039 21:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:16:18.039 21:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:18.039 21:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:18.039 21:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.039 21:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.039 21:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.039 21:43:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.039 21:43:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.039 21:43:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.297 21:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.297 "name": "raid_bdev1", 00:16:18.297 "uuid": "65f65ed6-f4d7-4931-86be-7397420671c1", 00:16:18.297 "strip_size_kb": 0, 00:16:18.297 "state": "online", 00:16:18.297 "raid_level": "raid1", 00:16:18.297 "superblock": true, 00:16:18.297 "num_base_bdevs": 4, 00:16:18.297 "num_base_bdevs_discovered": 3, 00:16:18.297 "num_base_bdevs_operational": 3, 00:16:18.297 "process": { 00:16:18.297 "type": "rebuild", 00:16:18.297 "target": "spare", 00:16:18.297 "progress": { 00:16:18.297 "blocks": 34816, 00:16:18.297 "percent": 54 00:16:18.297 } 00:16:18.297 }, 00:16:18.297 "base_bdevs_list": [ 00:16:18.297 { 00:16:18.297 "name": "spare", 00:16:18.297 "uuid": "a5b22551-fa8e-548e-8a8c-e1d8a40f24bf", 00:16:18.297 "is_configured": true, 00:16:18.297 "data_offset": 2048, 00:16:18.297 "data_size": 63488 00:16:18.297 }, 00:16:18.297 { 00:16:18.297 "name": null, 00:16:18.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.297 "is_configured": false, 00:16:18.297 
"data_offset": 0, 00:16:18.297 "data_size": 63488 00:16:18.297 }, 00:16:18.297 { 00:16:18.297 "name": "BaseBdev3", 00:16:18.297 "uuid": "0026972d-7c27-57da-862e-988f53cbf4b7", 00:16:18.297 "is_configured": true, 00:16:18.297 "data_offset": 2048, 00:16:18.297 "data_size": 63488 00:16:18.297 }, 00:16:18.297 { 00:16:18.297 "name": "BaseBdev4", 00:16:18.297 "uuid": "e5aac795-c0af-51d6-b5a1-c966f63fd0e5", 00:16:18.297 "is_configured": true, 00:16:18.297 "data_offset": 2048, 00:16:18.297 "data_size": 63488 00:16:18.297 } 00:16:18.297 ] 00:16:18.297 }' 00:16:18.297 21:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.297 21:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:18.297 21:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.297 [2024-12-10 21:43:18.927952] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:16:18.297 21:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:18.297 21:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:18.555 [2024-12-10 21:43:19.156265] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:16:18.813 [2024-12-10 21:43:19.382703] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:16:19.070 [2024-12-10 21:43:19.608133] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:16:19.328 91.50 IOPS, 274.50 MiB/s [2024-12-10T21:43:20.111Z] 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:19.328 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:19.328 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.328 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:19.328 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:19.328 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.328 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.328 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.328 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.328 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.328 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.328 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.328 "name": "raid_bdev1", 00:16:19.328 "uuid": "65f65ed6-f4d7-4931-86be-7397420671c1", 00:16:19.328 "strip_size_kb": 0, 00:16:19.328 "state": "online", 00:16:19.328 "raid_level": "raid1", 00:16:19.328 "superblock": true, 00:16:19.328 "num_base_bdevs": 4, 00:16:19.328 "num_base_bdevs_discovered": 3, 00:16:19.328 "num_base_bdevs_operational": 3, 00:16:19.328 "process": { 00:16:19.328 "type": "rebuild", 00:16:19.328 "target": "spare", 00:16:19.328 "progress": { 00:16:19.328 "blocks": 49152, 00:16:19.328 "percent": 77 00:16:19.328 } 00:16:19.328 }, 00:16:19.328 "base_bdevs_list": [ 00:16:19.328 { 00:16:19.328 "name": "spare", 00:16:19.328 "uuid": "a5b22551-fa8e-548e-8a8c-e1d8a40f24bf", 00:16:19.328 "is_configured": true, 00:16:19.328 "data_offset": 2048, 00:16:19.328 "data_size": 63488 
00:16:19.328 }, 00:16:19.328 { 00:16:19.328 "name": null, 00:16:19.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.328 "is_configured": false, 00:16:19.328 "data_offset": 0, 00:16:19.328 "data_size": 63488 00:16:19.328 }, 00:16:19.328 { 00:16:19.328 "name": "BaseBdev3", 00:16:19.328 "uuid": "0026972d-7c27-57da-862e-988f53cbf4b7", 00:16:19.328 "is_configured": true, 00:16:19.328 "data_offset": 2048, 00:16:19.328 "data_size": 63488 00:16:19.328 }, 00:16:19.328 { 00:16:19.328 "name": "BaseBdev4", 00:16:19.328 "uuid": "e5aac795-c0af-51d6-b5a1-c966f63fd0e5", 00:16:19.328 "is_configured": true, 00:16:19.328 "data_offset": 2048, 00:16:19.328 "data_size": 63488 00:16:19.328 } 00:16:19.328 ] 00:16:19.328 }' 00:16:19.328 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.328 21:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:19.328 21:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.328 21:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:19.328 21:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:19.587 [2024-12-10 21:43:20.310505] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:16:20.152 84.29 IOPS, 252.86 MiB/s [2024-12-10T21:43:20.935Z] [2024-12-10 21:43:20.748823] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:20.152 [2024-12-10 21:43:20.855537] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:20.152 [2024-12-10 21:43:20.860278] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.410 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 
00:16:20.410 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:20.410 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.410 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:20.410 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:20.410 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.410 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.410 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.410 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.410 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:20.410 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.410 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.410 "name": "raid_bdev1", 00:16:20.410 "uuid": "65f65ed6-f4d7-4931-86be-7397420671c1", 00:16:20.410 "strip_size_kb": 0, 00:16:20.410 "state": "online", 00:16:20.410 "raid_level": "raid1", 00:16:20.410 "superblock": true, 00:16:20.410 "num_base_bdevs": 4, 00:16:20.410 "num_base_bdevs_discovered": 3, 00:16:20.410 "num_base_bdevs_operational": 3, 00:16:20.410 "base_bdevs_list": [ 00:16:20.410 { 00:16:20.410 "name": "spare", 00:16:20.410 "uuid": "a5b22551-fa8e-548e-8a8c-e1d8a40f24bf", 00:16:20.410 "is_configured": true, 00:16:20.410 "data_offset": 2048, 00:16:20.410 "data_size": 63488 00:16:20.410 }, 00:16:20.410 { 00:16:20.410 "name": null, 00:16:20.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.410 "is_configured": false, 
00:16:20.410 "data_offset": 0, 00:16:20.410 "data_size": 63488 00:16:20.410 }, 00:16:20.410 { 00:16:20.410 "name": "BaseBdev3", 00:16:20.410 "uuid": "0026972d-7c27-57da-862e-988f53cbf4b7", 00:16:20.410 "is_configured": true, 00:16:20.410 "data_offset": 2048, 00:16:20.410 "data_size": 63488 00:16:20.410 }, 00:16:20.410 { 00:16:20.410 "name": "BaseBdev4", 00:16:20.410 "uuid": "e5aac795-c0af-51d6-b5a1-c966f63fd0e5", 00:16:20.410 "is_configured": true, 00:16:20.410 "data_offset": 2048, 00:16:20.410 "data_size": 63488 00:16:20.410 } 00:16:20.410 ] 00:16:20.410 }' 00:16:20.410 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.673 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:20.673 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.673 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:20.673 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:16:20.673 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:20.673 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.673 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:20.673 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:20.673 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.673 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.673 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.673 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.673 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:20.673 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.673 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.673 "name": "raid_bdev1", 00:16:20.673 "uuid": "65f65ed6-f4d7-4931-86be-7397420671c1", 00:16:20.673 "strip_size_kb": 0, 00:16:20.673 "state": "online", 00:16:20.673 "raid_level": "raid1", 00:16:20.673 "superblock": true, 00:16:20.673 "num_base_bdevs": 4, 00:16:20.673 "num_base_bdevs_discovered": 3, 00:16:20.673 "num_base_bdevs_operational": 3, 00:16:20.673 "base_bdevs_list": [ 00:16:20.673 { 00:16:20.673 "name": "spare", 00:16:20.673 "uuid": "a5b22551-fa8e-548e-8a8c-e1d8a40f24bf", 00:16:20.673 "is_configured": true, 00:16:20.673 "data_offset": 2048, 00:16:20.673 "data_size": 63488 00:16:20.673 }, 00:16:20.673 { 00:16:20.673 "name": null, 00:16:20.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.673 "is_configured": false, 00:16:20.673 "data_offset": 0, 00:16:20.673 "data_size": 63488 00:16:20.673 }, 00:16:20.673 { 00:16:20.673 "name": "BaseBdev3", 00:16:20.673 "uuid": "0026972d-7c27-57da-862e-988f53cbf4b7", 00:16:20.673 "is_configured": true, 00:16:20.673 "data_offset": 2048, 00:16:20.673 "data_size": 63488 00:16:20.673 }, 00:16:20.673 { 00:16:20.673 "name": "BaseBdev4", 00:16:20.673 "uuid": "e5aac795-c0af-51d6-b5a1-c966f63fd0e5", 00:16:20.673 "is_configured": true, 00:16:20.673 "data_offset": 2048, 00:16:20.673 "data_size": 63488 00:16:20.673 } 00:16:20.673 ] 00:16:20.673 }' 00:16:20.673 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.673 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:20.673 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:16:20.673 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:20.673 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:20.673 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.673 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.673 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:20.673 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:20.673 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:20.674 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.674 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.674 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.674 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.674 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.674 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.674 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:20.674 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.674 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.971 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.971 "name": "raid_bdev1", 00:16:20.971 "uuid": 
"65f65ed6-f4d7-4931-86be-7397420671c1", 00:16:20.971 "strip_size_kb": 0, 00:16:20.971 "state": "online", 00:16:20.971 "raid_level": "raid1", 00:16:20.971 "superblock": true, 00:16:20.971 "num_base_bdevs": 4, 00:16:20.971 "num_base_bdevs_discovered": 3, 00:16:20.971 "num_base_bdevs_operational": 3, 00:16:20.971 "base_bdevs_list": [ 00:16:20.971 { 00:16:20.971 "name": "spare", 00:16:20.971 "uuid": "a5b22551-fa8e-548e-8a8c-e1d8a40f24bf", 00:16:20.971 "is_configured": true, 00:16:20.971 "data_offset": 2048, 00:16:20.971 "data_size": 63488 00:16:20.971 }, 00:16:20.971 { 00:16:20.971 "name": null, 00:16:20.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.971 "is_configured": false, 00:16:20.971 "data_offset": 0, 00:16:20.971 "data_size": 63488 00:16:20.971 }, 00:16:20.971 { 00:16:20.971 "name": "BaseBdev3", 00:16:20.971 "uuid": "0026972d-7c27-57da-862e-988f53cbf4b7", 00:16:20.971 "is_configured": true, 00:16:20.971 "data_offset": 2048, 00:16:20.971 "data_size": 63488 00:16:20.971 }, 00:16:20.971 { 00:16:20.971 "name": "BaseBdev4", 00:16:20.971 "uuid": "e5aac795-c0af-51d6-b5a1-c966f63fd0e5", 00:16:20.971 "is_configured": true, 00:16:20.971 "data_offset": 2048, 00:16:20.971 "data_size": 63488 00:16:20.971 } 00:16:20.971 ] 00:16:20.971 }' 00:16:20.971 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.971 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.228 77.38 IOPS, 232.12 MiB/s [2024-12-10T21:43:22.011Z] 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:21.228 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.228 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.228 [2024-12-10 21:43:21.861106] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:21.228 [2024-12-10 
21:43:21.861196] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:21.228 00:16:21.229 Latency(us) 00:16:21.229 [2024-12-10T21:43:22.012Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:21.229 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:21.229 raid_bdev1 : 8.30 75.42 226.25 0.00 0.00 18411.26 347.00 119968.08 00:16:21.229 [2024-12-10T21:43:22.012Z] =================================================================================================================== 00:16:21.229 [2024-12-10T21:43:22.012Z] Total : 75.42 226.25 0.00 0.00 18411.26 347.00 119968.08 00:16:21.229 [2024-12-10 21:43:21.977804] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:21.229 [2024-12-10 21:43:21.977954] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.229 [2024-12-10 21:43:21.978071] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:21.229 [2024-12-10 21:43:21.978083] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:21.229 { 00:16:21.229 "results": [ 00:16:21.229 { 00:16:21.229 "job": "raid_bdev1", 00:16:21.229 "core_mask": "0x1", 00:16:21.229 "workload": "randrw", 00:16:21.229 "percentage": 50, 00:16:21.229 "status": "finished", 00:16:21.229 "queue_depth": 2, 00:16:21.229 "io_size": 3145728, 00:16:21.229 "runtime": 8.300664, 00:16:21.229 "iops": 75.41565349470838, 00:16:21.229 "mibps": 226.24696048412514, 00:16:21.229 "io_failed": 0, 00:16:21.229 "io_timeout": 0, 00:16:21.229 "avg_latency_us": 18411.264846463997, 00:16:21.229 "min_latency_us": 346.99737991266375, 00:16:21.229 "max_latency_us": 119968.08384279476 00:16:21.229 } 00:16:21.229 ], 00:16:21.229 "core_count": 1 00:16:21.229 } 00:16:21.229 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.229 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:21.229 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.229 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.229 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.229 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.486 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:21.486 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:21.486 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:21.486 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:21.486 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:21.486 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:21.486 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:21.486 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:21.486 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:21.486 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:21.486 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:21.486 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:21.486 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:21.486 /dev/nbd0 00:16:21.744 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:21.744 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:21.744 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:21.744 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:16:21.744 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:21.744 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:21.744 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:21.744 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:16:21.744 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:21.744 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:21.744 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:21.744 1+0 records in 00:16:21.744 1+0 records out 00:16:21.744 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042629 s, 9.6 MB/s 00:16:21.744 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:21.744 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:16:21.744 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:21.744 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:21.744 
21:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:16:21.744 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:21.744 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:21.744 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:21.744 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:16:21.744 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:16:21.744 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:21.744 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:16:21.744 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:16:21.744 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:21.744 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:16:21.744 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:21.744 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:21.744 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:21.744 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:21.744 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:21.744 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:21.744 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:16:22.001 
/dev/nbd1 00:16:22.001 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:22.001 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:22.001 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:22.001 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:16:22.001 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:22.001 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:22.001 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:22.001 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:16:22.001 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:22.001 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:22.001 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:22.001 1+0 records in 00:16:22.001 1+0 records out 00:16:22.001 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000602899 s, 6.8 MB/s 00:16:22.001 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:22.001 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:16:22.001 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:22.001 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:22.001 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@893 -- # return 0 00:16:22.001 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:22.001 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:22.001 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:22.259 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:22.259 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:22.259 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:22.259 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:22.259 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:22.259 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:22.259 21:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:22.516 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:22.516 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:22.516 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:22.516 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:22.516 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:22.516 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:22.516 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:22.516 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@45 -- # return 0 00:16:22.516 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:22.516 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:16:22.516 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:16:22.516 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:22.516 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:16:22.516 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:22.516 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:22.516 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:22.516 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:22.516 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:22.516 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:22.516 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:16:22.772 /dev/nbd1 00:16:22.772 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:22.772 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:22.772 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:22.772 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:16:22.773 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:22.773 21:43:23 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:22.773 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:22.773 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:16:22.773 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:22.773 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:22.773 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:22.773 1+0 records in 00:16:22.773 1+0 records out 00:16:22.773 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031317 s, 13.1 MB/s 00:16:22.773 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:22.773 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:16:22.773 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:22.773 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:22.773 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:16:22.773 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:22.773 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:22.773 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:22.773 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:22.773 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # 
local rpc_server=/var/tmp/spdk.sock 00:16:22.773 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:22.773 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:22.773 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:22.773 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:22.773 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:23.030 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:23.030 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:23.030 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:23.030 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:23.030 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:23.030 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:23.030 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:23.030 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:23.030 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:23.030 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:23.030 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:23.030 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:23.030 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # 
local i 00:16:23.030 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:23.030 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:23.288 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:23.288 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:23.288 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:23.288 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:23.288 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:23.288 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:23.288 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:23.288 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:23.288 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:23.288 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:23.288 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.288 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:23.288 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.288 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:23.288 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.288 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- 
# set +x 00:16:23.288 [2024-12-10 21:43:23.954235] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:23.288 [2024-12-10 21:43:23.954352] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.288 [2024-12-10 21:43:23.954400] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:23.288 [2024-12-10 21:43:23.954444] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.288 [2024-12-10 21:43:23.957030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.288 [2024-12-10 21:43:23.957116] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:23.288 [2024-12-10 21:43:23.957262] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:23.288 [2024-12-10 21:43:23.957342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:23.288 [2024-12-10 21:43:23.957555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:23.288 [2024-12-10 21:43:23.957721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:23.288 spare 00:16:23.288 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.288 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:23.288 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.288 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:23.288 [2024-12-10 21:43:24.057676] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:23.288 [2024-12-10 21:43:24.057778] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:23.288 [2024-12-10 21:43:24.058156] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:16:23.288 [2024-12-10 21:43:24.058444] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:23.288 [2024-12-10 21:43:24.058498] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:23.288 [2024-12-10 21:43:24.058767] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.288 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.288 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:23.288 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:23.288 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.288 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:23.288 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:23.288 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:23.288 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.288 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.288 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.288 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.546 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.546 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.546 21:43:24 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.546 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:23.546 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.546 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.546 "name": "raid_bdev1", 00:16:23.546 "uuid": "65f65ed6-f4d7-4931-86be-7397420671c1", 00:16:23.546 "strip_size_kb": 0, 00:16:23.546 "state": "online", 00:16:23.546 "raid_level": "raid1", 00:16:23.546 "superblock": true, 00:16:23.546 "num_base_bdevs": 4, 00:16:23.546 "num_base_bdevs_discovered": 3, 00:16:23.546 "num_base_bdevs_operational": 3, 00:16:23.546 "base_bdevs_list": [ 00:16:23.546 { 00:16:23.546 "name": "spare", 00:16:23.546 "uuid": "a5b22551-fa8e-548e-8a8c-e1d8a40f24bf", 00:16:23.546 "is_configured": true, 00:16:23.546 "data_offset": 2048, 00:16:23.546 "data_size": 63488 00:16:23.546 }, 00:16:23.546 { 00:16:23.546 "name": null, 00:16:23.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.546 "is_configured": false, 00:16:23.546 "data_offset": 2048, 00:16:23.546 "data_size": 63488 00:16:23.546 }, 00:16:23.546 { 00:16:23.546 "name": "BaseBdev3", 00:16:23.546 "uuid": "0026972d-7c27-57da-862e-988f53cbf4b7", 00:16:23.546 "is_configured": true, 00:16:23.546 "data_offset": 2048, 00:16:23.547 "data_size": 63488 00:16:23.547 }, 00:16:23.547 { 00:16:23.547 "name": "BaseBdev4", 00:16:23.547 "uuid": "e5aac795-c0af-51d6-b5a1-c966f63fd0e5", 00:16:23.547 "is_configured": true, 00:16:23.547 "data_offset": 2048, 00:16:23.547 "data_size": 63488 00:16:23.547 } 00:16:23.547 ] 00:16:23.547 }' 00:16:23.547 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.547 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:23.805 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:23.805 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:23.805 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:23.805 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:23.805 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:23.805 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.805 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.805 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.805 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:23.805 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.063 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:24.063 "name": "raid_bdev1", 00:16:24.063 "uuid": "65f65ed6-f4d7-4931-86be-7397420671c1", 00:16:24.063 "strip_size_kb": 0, 00:16:24.063 "state": "online", 00:16:24.063 "raid_level": "raid1", 00:16:24.063 "superblock": true, 00:16:24.063 "num_base_bdevs": 4, 00:16:24.063 "num_base_bdevs_discovered": 3, 00:16:24.063 "num_base_bdevs_operational": 3, 00:16:24.063 "base_bdevs_list": [ 00:16:24.063 { 00:16:24.063 "name": "spare", 00:16:24.063 "uuid": "a5b22551-fa8e-548e-8a8c-e1d8a40f24bf", 00:16:24.063 "is_configured": true, 00:16:24.063 "data_offset": 2048, 00:16:24.063 "data_size": 63488 00:16:24.063 }, 00:16:24.063 { 00:16:24.063 "name": null, 00:16:24.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.063 "is_configured": false, 00:16:24.063 "data_offset": 2048, 00:16:24.063 "data_size": 63488 
00:16:24.063 }, 00:16:24.063 { 00:16:24.063 "name": "BaseBdev3", 00:16:24.063 "uuid": "0026972d-7c27-57da-862e-988f53cbf4b7", 00:16:24.063 "is_configured": true, 00:16:24.063 "data_offset": 2048, 00:16:24.063 "data_size": 63488 00:16:24.063 }, 00:16:24.063 { 00:16:24.063 "name": "BaseBdev4", 00:16:24.063 "uuid": "e5aac795-c0af-51d6-b5a1-c966f63fd0e5", 00:16:24.063 "is_configured": true, 00:16:24.063 "data_offset": 2048, 00:16:24.063 "data_size": 63488 00:16:24.063 } 00:16:24.063 ] 00:16:24.063 }' 00:16:24.063 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:24.063 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:24.063 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:24.063 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:24.063 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.063 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.063 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.063 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:24.063 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.063 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:24.063 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:24.063 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.063 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.063 [2024-12-10 
21:43:24.753696] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:24.063 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.063 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:24.063 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:24.064 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:24.064 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:24.064 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:24.064 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:24.064 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.064 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.064 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.064 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.064 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.064 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.064 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.064 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.064 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.064 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.064 
"name": "raid_bdev1", 00:16:24.064 "uuid": "65f65ed6-f4d7-4931-86be-7397420671c1", 00:16:24.064 "strip_size_kb": 0, 00:16:24.064 "state": "online", 00:16:24.064 "raid_level": "raid1", 00:16:24.064 "superblock": true, 00:16:24.064 "num_base_bdevs": 4, 00:16:24.064 "num_base_bdevs_discovered": 2, 00:16:24.064 "num_base_bdevs_operational": 2, 00:16:24.064 "base_bdevs_list": [ 00:16:24.064 { 00:16:24.064 "name": null, 00:16:24.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.064 "is_configured": false, 00:16:24.064 "data_offset": 0, 00:16:24.064 "data_size": 63488 00:16:24.064 }, 00:16:24.064 { 00:16:24.064 "name": null, 00:16:24.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.064 "is_configured": false, 00:16:24.064 "data_offset": 2048, 00:16:24.064 "data_size": 63488 00:16:24.064 }, 00:16:24.064 { 00:16:24.064 "name": "BaseBdev3", 00:16:24.064 "uuid": "0026972d-7c27-57da-862e-988f53cbf4b7", 00:16:24.064 "is_configured": true, 00:16:24.064 "data_offset": 2048, 00:16:24.064 "data_size": 63488 00:16:24.064 }, 00:16:24.064 { 00:16:24.064 "name": "BaseBdev4", 00:16:24.064 "uuid": "e5aac795-c0af-51d6-b5a1-c966f63fd0e5", 00:16:24.064 "is_configured": true, 00:16:24.064 "data_offset": 2048, 00:16:24.064 "data_size": 63488 00:16:24.064 } 00:16:24.064 ] 00:16:24.064 }' 00:16:24.064 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.064 21:43:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.631 21:43:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:24.631 21:43:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.631 21:43:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.631 [2024-12-10 21:43:25.232982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:24.631 [2024-12-10 
21:43:25.233296] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:24.631 [2024-12-10 21:43:25.233351] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:24.631 [2024-12-10 21:43:25.233395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:24.631 [2024-12-10 21:43:25.250053] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:16:24.631 21:43:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.631 21:43:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:24.631 [2024-12-10 21:43:25.252159] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:25.564 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:25.564 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.564 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:25.564 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:25.564 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.564 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.564 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.564 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.564 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.564 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.564 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.564 "name": "raid_bdev1", 00:16:25.564 "uuid": "65f65ed6-f4d7-4931-86be-7397420671c1", 00:16:25.564 "strip_size_kb": 0, 00:16:25.564 "state": "online", 00:16:25.564 "raid_level": "raid1", 00:16:25.564 "superblock": true, 00:16:25.564 "num_base_bdevs": 4, 00:16:25.564 "num_base_bdevs_discovered": 3, 00:16:25.564 "num_base_bdevs_operational": 3, 00:16:25.564 "process": { 00:16:25.564 "type": "rebuild", 00:16:25.564 "target": "spare", 00:16:25.564 "progress": { 00:16:25.564 "blocks": 20480, 00:16:25.564 "percent": 32 00:16:25.564 } 00:16:25.564 }, 00:16:25.564 "base_bdevs_list": [ 00:16:25.564 { 00:16:25.564 "name": "spare", 00:16:25.564 "uuid": "a5b22551-fa8e-548e-8a8c-e1d8a40f24bf", 00:16:25.564 "is_configured": true, 00:16:25.564 "data_offset": 2048, 00:16:25.564 "data_size": 63488 00:16:25.564 }, 00:16:25.564 { 00:16:25.564 "name": null, 00:16:25.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.564 "is_configured": false, 00:16:25.564 "data_offset": 2048, 00:16:25.564 "data_size": 63488 00:16:25.564 }, 00:16:25.564 { 00:16:25.564 "name": "BaseBdev3", 00:16:25.564 "uuid": "0026972d-7c27-57da-862e-988f53cbf4b7", 00:16:25.564 "is_configured": true, 00:16:25.564 "data_offset": 2048, 00:16:25.564 "data_size": 63488 00:16:25.564 }, 00:16:25.564 { 00:16:25.564 "name": "BaseBdev4", 00:16:25.564 "uuid": "e5aac795-c0af-51d6-b5a1-c966f63fd0e5", 00:16:25.564 "is_configured": true, 00:16:25.564 "data_offset": 2048, 00:16:25.564 "data_size": 63488 00:16:25.564 } 00:16:25.564 ] 00:16:25.564 }' 00:16:25.564 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.823 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:25.823 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:16:25.823 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:25.823 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:25.823 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.823 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.823 [2024-12-10 21:43:26.404007] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:25.823 [2024-12-10 21:43:26.458106] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:25.823 [2024-12-10 21:43:26.458286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.823 [2024-12-10 21:43:26.458329] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:25.823 [2024-12-10 21:43:26.458351] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:25.823 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.823 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:25.823 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.823 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.823 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:25.823 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:25.823 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:25.823 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:16:25.823 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.823 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.823 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.823 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.823 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.823 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.823 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.823 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.823 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.823 "name": "raid_bdev1", 00:16:25.823 "uuid": "65f65ed6-f4d7-4931-86be-7397420671c1", 00:16:25.823 "strip_size_kb": 0, 00:16:25.823 "state": "online", 00:16:25.823 "raid_level": "raid1", 00:16:25.823 "superblock": true, 00:16:25.823 "num_base_bdevs": 4, 00:16:25.823 "num_base_bdevs_discovered": 2, 00:16:25.823 "num_base_bdevs_operational": 2, 00:16:25.823 "base_bdevs_list": [ 00:16:25.823 { 00:16:25.823 "name": null, 00:16:25.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.823 "is_configured": false, 00:16:25.823 "data_offset": 0, 00:16:25.823 "data_size": 63488 00:16:25.823 }, 00:16:25.823 { 00:16:25.823 "name": null, 00:16:25.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.823 "is_configured": false, 00:16:25.823 "data_offset": 2048, 00:16:25.823 "data_size": 63488 00:16:25.823 }, 00:16:25.823 { 00:16:25.823 "name": "BaseBdev3", 00:16:25.823 "uuid": "0026972d-7c27-57da-862e-988f53cbf4b7", 00:16:25.823 "is_configured": true, 
00:16:25.823 "data_offset": 2048, 00:16:25.823 "data_size": 63488 00:16:25.823 }, 00:16:25.823 { 00:16:25.823 "name": "BaseBdev4", 00:16:25.823 "uuid": "e5aac795-c0af-51d6-b5a1-c966f63fd0e5", 00:16:25.823 "is_configured": true, 00:16:25.823 "data_offset": 2048, 00:16:25.823 "data_size": 63488 00:16:25.823 } 00:16:25.823 ] 00:16:25.823 }' 00:16:25.823 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.823 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:26.391 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:26.391 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.391 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:26.391 [2024-12-10 21:43:26.946222] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:26.391 [2024-12-10 21:43:26.946356] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.391 [2024-12-10 21:43:26.946415] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:26.391 [2024-12-10 21:43:26.946466] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.391 [2024-12-10 21:43:26.947022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.391 [2024-12-10 21:43:26.947092] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:26.391 [2024-12-10 21:43:26.947236] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:26.391 [2024-12-10 21:43:26.947280] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:26.391 [2024-12-10 21:43:26.947330] 
bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:26.391 [2024-12-10 21:43:26.947382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:26.391 [2024-12-10 21:43:26.962609] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:16:26.391 spare 00:16:26.391 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.391 21:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:26.391 [2024-12-10 21:43:26.964677] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:27.325 21:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:27.325 21:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.325 21:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:27.325 21:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:27.325 21:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.325 21:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.325 21:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.325 21:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.325 21:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.325 21:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.325 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.325 "name": "raid_bdev1", 00:16:27.325 
"uuid": "65f65ed6-f4d7-4931-86be-7397420671c1", 00:16:27.325 "strip_size_kb": 0, 00:16:27.325 "state": "online", 00:16:27.325 "raid_level": "raid1", 00:16:27.325 "superblock": true, 00:16:27.325 "num_base_bdevs": 4, 00:16:27.325 "num_base_bdevs_discovered": 3, 00:16:27.325 "num_base_bdevs_operational": 3, 00:16:27.325 "process": { 00:16:27.325 "type": "rebuild", 00:16:27.325 "target": "spare", 00:16:27.325 "progress": { 00:16:27.325 "blocks": 20480, 00:16:27.325 "percent": 32 00:16:27.325 } 00:16:27.325 }, 00:16:27.325 "base_bdevs_list": [ 00:16:27.325 { 00:16:27.325 "name": "spare", 00:16:27.325 "uuid": "a5b22551-fa8e-548e-8a8c-e1d8a40f24bf", 00:16:27.325 "is_configured": true, 00:16:27.325 "data_offset": 2048, 00:16:27.325 "data_size": 63488 00:16:27.325 }, 00:16:27.325 { 00:16:27.325 "name": null, 00:16:27.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.325 "is_configured": false, 00:16:27.325 "data_offset": 2048, 00:16:27.325 "data_size": 63488 00:16:27.325 }, 00:16:27.325 { 00:16:27.325 "name": "BaseBdev3", 00:16:27.325 "uuid": "0026972d-7c27-57da-862e-988f53cbf4b7", 00:16:27.325 "is_configured": true, 00:16:27.325 "data_offset": 2048, 00:16:27.325 "data_size": 63488 00:16:27.325 }, 00:16:27.325 { 00:16:27.325 "name": "BaseBdev4", 00:16:27.325 "uuid": "e5aac795-c0af-51d6-b5a1-c966f63fd0e5", 00:16:27.325 "is_configured": true, 00:16:27.325 "data_offset": 2048, 00:16:27.325 "data_size": 63488 00:16:27.325 } 00:16:27.325 ] 00:16:27.325 }' 00:16:27.325 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.325 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:27.325 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.584 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:27.584 21:43:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:27.584 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.584 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.584 [2024-12-10 21:43:28.124300] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:27.584 [2024-12-10 21:43:28.170557] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:27.584 [2024-12-10 21:43:28.170738] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.584 [2024-12-10 21:43:28.170800] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:27.584 [2024-12-10 21:43:28.170828] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:27.584 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.584 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:27.584 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:27.584 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.584 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:27.584 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:27.584 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:27.584 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.584 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.584 21:43:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.584 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.584 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.584 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.584 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.584 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.584 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.584 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.584 "name": "raid_bdev1", 00:16:27.584 "uuid": "65f65ed6-f4d7-4931-86be-7397420671c1", 00:16:27.584 "strip_size_kb": 0, 00:16:27.584 "state": "online", 00:16:27.584 "raid_level": "raid1", 00:16:27.584 "superblock": true, 00:16:27.584 "num_base_bdevs": 4, 00:16:27.584 "num_base_bdevs_discovered": 2, 00:16:27.584 "num_base_bdevs_operational": 2, 00:16:27.584 "base_bdevs_list": [ 00:16:27.584 { 00:16:27.584 "name": null, 00:16:27.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.584 "is_configured": false, 00:16:27.584 "data_offset": 0, 00:16:27.584 "data_size": 63488 00:16:27.584 }, 00:16:27.584 { 00:16:27.584 "name": null, 00:16:27.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.584 "is_configured": false, 00:16:27.584 "data_offset": 2048, 00:16:27.584 "data_size": 63488 00:16:27.584 }, 00:16:27.584 { 00:16:27.584 "name": "BaseBdev3", 00:16:27.584 "uuid": "0026972d-7c27-57da-862e-988f53cbf4b7", 00:16:27.584 "is_configured": true, 00:16:27.584 "data_offset": 2048, 00:16:27.584 "data_size": 63488 00:16:27.584 }, 00:16:27.584 { 00:16:27.584 "name": "BaseBdev4", 00:16:27.584 "uuid": 
"e5aac795-c0af-51d6-b5a1-c966f63fd0e5", 00:16:27.584 "is_configured": true, 00:16:27.584 "data_offset": 2048, 00:16:27.584 "data_size": 63488 00:16:27.584 } 00:16:27.584 ] 00:16:27.584 }' 00:16:27.584 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.584 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.843 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:27.843 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.843 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:27.843 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:27.843 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.843 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.843 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.843 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.843 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.843 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.101 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.101 "name": "raid_bdev1", 00:16:28.101 "uuid": "65f65ed6-f4d7-4931-86be-7397420671c1", 00:16:28.101 "strip_size_kb": 0, 00:16:28.101 "state": "online", 00:16:28.101 "raid_level": "raid1", 00:16:28.101 "superblock": true, 00:16:28.101 "num_base_bdevs": 4, 00:16:28.101 "num_base_bdevs_discovered": 2, 00:16:28.101 "num_base_bdevs_operational": 2, 00:16:28.101 
"base_bdevs_list": [ 00:16:28.101 { 00:16:28.101 "name": null, 00:16:28.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.101 "is_configured": false, 00:16:28.101 "data_offset": 0, 00:16:28.101 "data_size": 63488 00:16:28.101 }, 00:16:28.101 { 00:16:28.101 "name": null, 00:16:28.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.101 "is_configured": false, 00:16:28.101 "data_offset": 2048, 00:16:28.101 "data_size": 63488 00:16:28.101 }, 00:16:28.101 { 00:16:28.101 "name": "BaseBdev3", 00:16:28.101 "uuid": "0026972d-7c27-57da-862e-988f53cbf4b7", 00:16:28.101 "is_configured": true, 00:16:28.101 "data_offset": 2048, 00:16:28.101 "data_size": 63488 00:16:28.101 }, 00:16:28.101 { 00:16:28.101 "name": "BaseBdev4", 00:16:28.101 "uuid": "e5aac795-c0af-51d6-b5a1-c966f63fd0e5", 00:16:28.101 "is_configured": true, 00:16:28.101 "data_offset": 2048, 00:16:28.101 "data_size": 63488 00:16:28.101 } 00:16:28.101 ] 00:16:28.101 }' 00:16:28.101 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.101 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:28.101 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.101 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:28.101 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:28.101 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.101 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:28.101 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.101 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 
00:16:28.101 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.101 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:28.101 [2024-12-10 21:43:28.773913] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:28.101 [2024-12-10 21:43:28.774035] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.101 [2024-12-10 21:43:28.774062] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:16:28.101 [2024-12-10 21:43:28.774075] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.101 [2024-12-10 21:43:28.774601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.101 [2024-12-10 21:43:28.774634] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:28.101 [2024-12-10 21:43:28.774730] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:28.101 [2024-12-10 21:43:28.774753] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:28.101 [2024-12-10 21:43:28.774761] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:28.101 [2024-12-10 21:43:28.774774] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:28.101 BaseBdev1 00:16:28.101 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.101 21:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:29.036 21:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:29.036 21:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:16:29.036 21:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:29.036 21:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:29.036 21:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:29.036 21:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:29.036 21:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.036 21:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.036 21:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.036 21:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.036 21:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.036 21:43:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.036 21:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.036 21:43:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:29.036 21:43:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.299 21:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.299 "name": "raid_bdev1", 00:16:29.299 "uuid": "65f65ed6-f4d7-4931-86be-7397420671c1", 00:16:29.299 "strip_size_kb": 0, 00:16:29.299 "state": "online", 00:16:29.299 "raid_level": "raid1", 00:16:29.299 "superblock": true, 00:16:29.299 "num_base_bdevs": 4, 00:16:29.299 "num_base_bdevs_discovered": 2, 00:16:29.299 "num_base_bdevs_operational": 2, 00:16:29.299 "base_bdevs_list": [ 00:16:29.299 { 00:16:29.299 
"name": null, 00:16:29.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.299 "is_configured": false, 00:16:29.299 "data_offset": 0, 00:16:29.299 "data_size": 63488 00:16:29.299 }, 00:16:29.299 { 00:16:29.299 "name": null, 00:16:29.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.299 "is_configured": false, 00:16:29.299 "data_offset": 2048, 00:16:29.299 "data_size": 63488 00:16:29.299 }, 00:16:29.299 { 00:16:29.299 "name": "BaseBdev3", 00:16:29.299 "uuid": "0026972d-7c27-57da-862e-988f53cbf4b7", 00:16:29.299 "is_configured": true, 00:16:29.299 "data_offset": 2048, 00:16:29.299 "data_size": 63488 00:16:29.299 }, 00:16:29.299 { 00:16:29.299 "name": "BaseBdev4", 00:16:29.299 "uuid": "e5aac795-c0af-51d6-b5a1-c966f63fd0e5", 00:16:29.299 "is_configured": true, 00:16:29.299 "data_offset": 2048, 00:16:29.299 "data_size": 63488 00:16:29.299 } 00:16:29.299 ] 00:16:29.299 }' 00:16:29.299 21:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.299 21:43:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:29.563 21:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:29.563 21:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.563 21:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:29.563 21:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:29.563 21:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.563 21:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.563 21:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.563 21:43:30 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.563 21:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:29.563 21:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.563 21:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.563 "name": "raid_bdev1", 00:16:29.563 "uuid": "65f65ed6-f4d7-4931-86be-7397420671c1", 00:16:29.563 "strip_size_kb": 0, 00:16:29.563 "state": "online", 00:16:29.563 "raid_level": "raid1", 00:16:29.563 "superblock": true, 00:16:29.563 "num_base_bdevs": 4, 00:16:29.563 "num_base_bdevs_discovered": 2, 00:16:29.563 "num_base_bdevs_operational": 2, 00:16:29.563 "base_bdevs_list": [ 00:16:29.563 { 00:16:29.563 "name": null, 00:16:29.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.563 "is_configured": false, 00:16:29.563 "data_offset": 0, 00:16:29.563 "data_size": 63488 00:16:29.563 }, 00:16:29.563 { 00:16:29.563 "name": null, 00:16:29.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.563 "is_configured": false, 00:16:29.563 "data_offset": 2048, 00:16:29.563 "data_size": 63488 00:16:29.563 }, 00:16:29.563 { 00:16:29.563 "name": "BaseBdev3", 00:16:29.563 "uuid": "0026972d-7c27-57da-862e-988f53cbf4b7", 00:16:29.563 "is_configured": true, 00:16:29.563 "data_offset": 2048, 00:16:29.563 "data_size": 63488 00:16:29.563 }, 00:16:29.563 { 00:16:29.563 "name": "BaseBdev4", 00:16:29.563 "uuid": "e5aac795-c0af-51d6-b5a1-c966f63fd0e5", 00:16:29.563 "is_configured": true, 00:16:29.563 "data_offset": 2048, 00:16:29.563 "data_size": 63488 00:16:29.563 } 00:16:29.563 ] 00:16:29.563 }' 00:16:29.563 21:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.821 21:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:29.821 21:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:16:29.821 21:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:29.821 21:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:29.821 21:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:16:29.821 21:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:29.821 21:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:29.821 21:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:29.821 21:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:29.821 21:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:29.821 21:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:29.821 21:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.821 21:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:29.821 [2024-12-10 21:43:30.383529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:29.821 [2024-12-10 21:43:30.383759] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:29.821 [2024-12-10 21:43:30.383778] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:29.821 request: 00:16:29.821 { 00:16:29.821 "base_bdev": "BaseBdev1", 00:16:29.821 "raid_bdev": "raid_bdev1", 00:16:29.821 "method": "bdev_raid_add_base_bdev", 00:16:29.821 
"req_id": 1 00:16:29.821 } 00:16:29.821 Got JSON-RPC error response 00:16:29.821 response: 00:16:29.821 { 00:16:29.821 "code": -22, 00:16:29.821 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:29.821 } 00:16:29.821 21:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:29.821 21:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:16:29.821 21:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:29.821 21:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:29.821 21:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:29.821 21:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:30.758 21:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:30.758 21:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:30.758 21:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:30.758 21:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:30.758 21:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:30.758 21:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:30.758 21:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.758 21:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.758 21:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.758 21:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.758 
21:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.758 21:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.758 21:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.758 21:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.758 21:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.758 21:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.758 "name": "raid_bdev1", 00:16:30.758 "uuid": "65f65ed6-f4d7-4931-86be-7397420671c1", 00:16:30.758 "strip_size_kb": 0, 00:16:30.758 "state": "online", 00:16:30.758 "raid_level": "raid1", 00:16:30.758 "superblock": true, 00:16:30.758 "num_base_bdevs": 4, 00:16:30.758 "num_base_bdevs_discovered": 2, 00:16:30.758 "num_base_bdevs_operational": 2, 00:16:30.758 "base_bdevs_list": [ 00:16:30.758 { 00:16:30.758 "name": null, 00:16:30.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.758 "is_configured": false, 00:16:30.758 "data_offset": 0, 00:16:30.758 "data_size": 63488 00:16:30.758 }, 00:16:30.758 { 00:16:30.758 "name": null, 00:16:30.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.758 "is_configured": false, 00:16:30.758 "data_offset": 2048, 00:16:30.758 "data_size": 63488 00:16:30.758 }, 00:16:30.758 { 00:16:30.758 "name": "BaseBdev3", 00:16:30.758 "uuid": "0026972d-7c27-57da-862e-988f53cbf4b7", 00:16:30.758 "is_configured": true, 00:16:30.758 "data_offset": 2048, 00:16:30.758 "data_size": 63488 00:16:30.758 }, 00:16:30.758 { 00:16:30.758 "name": "BaseBdev4", 00:16:30.758 "uuid": "e5aac795-c0af-51d6-b5a1-c966f63fd0e5", 00:16:30.758 "is_configured": true, 00:16:30.758 "data_offset": 2048, 00:16:30.758 "data_size": 63488 00:16:30.758 } 00:16:30.758 ] 00:16:30.758 }' 00:16:30.758 21:43:31 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.758 21:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:31.325 21:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:31.325 21:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.325 21:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:31.325 21:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:31.325 21:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.325 21:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.325 21:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.325 21:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.325 21:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:31.325 21:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.325 21:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.325 "name": "raid_bdev1", 00:16:31.325 "uuid": "65f65ed6-f4d7-4931-86be-7397420671c1", 00:16:31.325 "strip_size_kb": 0, 00:16:31.325 "state": "online", 00:16:31.325 "raid_level": "raid1", 00:16:31.325 "superblock": true, 00:16:31.325 "num_base_bdevs": 4, 00:16:31.325 "num_base_bdevs_discovered": 2, 00:16:31.325 "num_base_bdevs_operational": 2, 00:16:31.325 "base_bdevs_list": [ 00:16:31.325 { 00:16:31.325 "name": null, 00:16:31.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.325 "is_configured": false, 00:16:31.325 "data_offset": 0, 00:16:31.325 
"data_size": 63488 00:16:31.325 }, 00:16:31.325 { 00:16:31.325 "name": null, 00:16:31.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.325 "is_configured": false, 00:16:31.325 "data_offset": 2048, 00:16:31.325 "data_size": 63488 00:16:31.325 }, 00:16:31.325 { 00:16:31.325 "name": "BaseBdev3", 00:16:31.325 "uuid": "0026972d-7c27-57da-862e-988f53cbf4b7", 00:16:31.325 "is_configured": true, 00:16:31.325 "data_offset": 2048, 00:16:31.325 "data_size": 63488 00:16:31.325 }, 00:16:31.325 { 00:16:31.325 "name": "BaseBdev4", 00:16:31.325 "uuid": "e5aac795-c0af-51d6-b5a1-c966f63fd0e5", 00:16:31.325 "is_configured": true, 00:16:31.325 "data_offset": 2048, 00:16:31.325 "data_size": 63488 00:16:31.325 } 00:16:31.325 ] 00:16:31.325 }' 00:16:31.325 21:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.325 21:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:31.325 21:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.325 21:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:31.325 21:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79334 00:16:31.325 21:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79334 ']' 00:16:31.325 21:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79334 00:16:31.325 21:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:16:31.325 21:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:31.325 21:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79334 00:16:31.325 21:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:31.325 
21:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:31.325 killing process with pid 79334 00:16:31.325 Received shutdown signal, test time was about 18.385798 seconds 00:16:31.325 00:16:31.326 Latency(us) 00:16:31.326 [2024-12-10T21:43:32.109Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.326 [2024-12-10T21:43:32.109Z] =================================================================================================================== 00:16:31.326 [2024-12-10T21:43:32.109Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:31.326 21:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79334' 00:16:31.326 21:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79334 00:16:31.326 [2024-12-10 21:43:32.018602] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:31.326 [2024-12-10 21:43:32.018735] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:31.326 21:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79334 00:16:31.326 [2024-12-10 21:43:32.018813] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:31.326 [2024-12-10 21:43:32.018823] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:31.893 [2024-12-10 21:43:32.475551] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:33.268 21:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:33.268 ************************************ 00:16:33.268 END TEST raid_rebuild_test_sb_io 00:16:33.268 ************************************ 00:16:33.268 00:16:33.268 real 0m22.071s 00:16:33.268 user 0m29.024s 00:16:33.268 sys 0m2.570s 00:16:33.268 21:43:33 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:16:33.268 21:43:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:33.268 21:43:33 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:33.268 21:43:33 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:16:33.268 21:43:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:33.268 21:43:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:33.268 21:43:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:33.268 ************************************ 00:16:33.268 START TEST raid5f_state_function_test 00:16:33.268 ************************************ 00:16:33.268 21:43:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:16:33.268 21:43:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:33.268 21:43:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:16:33.268 21:43:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:33.268 21:43:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:33.268 21:43:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:33.268 21:43:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:33.268 21:43:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:33.268 21:43:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:33.268 21:43:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:33.268 21:43:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:33.268 
21:43:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:33.268 21:43:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:33.268 21:43:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:33.268 21:43:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:33.268 21:43:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:33.268 21:43:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:33.268 21:43:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:33.268 21:43:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:33.268 21:43:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:33.268 21:43:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:33.268 21:43:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:33.268 21:43:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:33.268 21:43:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:33.268 21:43:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:33.268 21:43:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:33.268 21:43:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:33.268 21:43:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80057 00:16:33.268 21:43:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:33.268 21:43:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80057' 00:16:33.268 Process raid pid: 80057 00:16:33.268 21:43:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80057 00:16:33.268 21:43:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80057 ']' 00:16:33.268 21:43:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.268 21:43:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:33.268 21:43:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.268 21:43:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:33.268 21:43:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.268 [2024-12-10 21:43:33.917918] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:16:33.268 [2024-12-10 21:43:33.918107] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.526 [2024-12-10 21:43:34.079103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.526 [2024-12-10 21:43:34.203322] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.784 [2024-12-10 21:43:34.422759] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:33.784 [2024-12-10 21:43:34.422884] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:34.043 21:43:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:34.043 21:43:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:16:34.043 21:43:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:34.043 21:43:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.043 21:43:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.043 [2024-12-10 21:43:34.781388] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:34.043 [2024-12-10 21:43:34.781512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:34.043 [2024-12-10 21:43:34.781569] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:34.043 [2024-12-10 21:43:34.781598] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:34.043 [2024-12-10 21:43:34.781621] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:16:34.043 [2024-12-10 21:43:34.781654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:34.043 21:43:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.043 21:43:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:34.043 21:43:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:34.043 21:43:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.043 21:43:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.043 21:43:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.043 21:43:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:34.043 21:43:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.043 21:43:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.043 21:43:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.043 21:43:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.043 21:43:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.043 21:43:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.043 21:43:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.043 21:43:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.043 21:43:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:16:34.301 21:43:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.301 "name": "Existed_Raid", 00:16:34.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.301 "strip_size_kb": 64, 00:16:34.301 "state": "configuring", 00:16:34.301 "raid_level": "raid5f", 00:16:34.301 "superblock": false, 00:16:34.301 "num_base_bdevs": 3, 00:16:34.301 "num_base_bdevs_discovered": 0, 00:16:34.301 "num_base_bdevs_operational": 3, 00:16:34.301 "base_bdevs_list": [ 00:16:34.301 { 00:16:34.301 "name": "BaseBdev1", 00:16:34.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.301 "is_configured": false, 00:16:34.301 "data_offset": 0, 00:16:34.301 "data_size": 0 00:16:34.301 }, 00:16:34.301 { 00:16:34.301 "name": "BaseBdev2", 00:16:34.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.301 "is_configured": false, 00:16:34.301 "data_offset": 0, 00:16:34.301 "data_size": 0 00:16:34.301 }, 00:16:34.301 { 00:16:34.301 "name": "BaseBdev3", 00:16:34.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.301 "is_configured": false, 00:16:34.301 "data_offset": 0, 00:16:34.301 "data_size": 0 00:16:34.301 } 00:16:34.301 ] 00:16:34.301 }' 00:16:34.301 21:43:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.301 21:43:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.559 21:43:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:34.559 21:43:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.559 21:43:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.559 [2024-12-10 21:43:35.264548] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:34.559 [2024-12-10 21:43:35.264588] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:16:34.559 21:43:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.559 21:43:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:34.559 21:43:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.559 21:43:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.559 [2024-12-10 21:43:35.276521] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:34.559 [2024-12-10 21:43:35.276616] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:34.559 [2024-12-10 21:43:35.276668] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:34.559 [2024-12-10 21:43:35.276696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:34.559 [2024-12-10 21:43:35.276768] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:34.559 [2024-12-10 21:43:35.276796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:34.559 21:43:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.559 21:43:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:34.559 21:43:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.559 21:43:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.559 [2024-12-10 21:43:35.327288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:34.559 BaseBdev1 00:16:34.559 21:43:35 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.559 21:43:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:34.559 21:43:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:34.559 21:43:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:34.559 21:43:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:34.559 21:43:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:34.559 21:43:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:34.559 21:43:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:34.559 21:43:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.559 21:43:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.818 21:43:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.818 21:43:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:34.818 21:43:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.818 21:43:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.818 [ 00:16:34.818 { 00:16:34.818 "name": "BaseBdev1", 00:16:34.818 "aliases": [ 00:16:34.818 "59c37adf-b8b3-46ae-8d33-e35239a223d9" 00:16:34.818 ], 00:16:34.818 "product_name": "Malloc disk", 00:16:34.818 "block_size": 512, 00:16:34.818 "num_blocks": 65536, 00:16:34.818 "uuid": "59c37adf-b8b3-46ae-8d33-e35239a223d9", 00:16:34.818 "assigned_rate_limits": { 00:16:34.818 "rw_ios_per_sec": 0, 00:16:34.818 
"rw_mbytes_per_sec": 0, 00:16:34.818 "r_mbytes_per_sec": 0, 00:16:34.818 "w_mbytes_per_sec": 0 00:16:34.818 }, 00:16:34.818 "claimed": true, 00:16:34.818 "claim_type": "exclusive_write", 00:16:34.818 "zoned": false, 00:16:34.818 "supported_io_types": { 00:16:34.818 "read": true, 00:16:34.818 "write": true, 00:16:34.818 "unmap": true, 00:16:34.818 "flush": true, 00:16:34.818 "reset": true, 00:16:34.818 "nvme_admin": false, 00:16:34.818 "nvme_io": false, 00:16:34.818 "nvme_io_md": false, 00:16:34.818 "write_zeroes": true, 00:16:34.818 "zcopy": true, 00:16:34.818 "get_zone_info": false, 00:16:34.818 "zone_management": false, 00:16:34.818 "zone_append": false, 00:16:34.818 "compare": false, 00:16:34.818 "compare_and_write": false, 00:16:34.818 "abort": true, 00:16:34.818 "seek_hole": false, 00:16:34.818 "seek_data": false, 00:16:34.818 "copy": true, 00:16:34.818 "nvme_iov_md": false 00:16:34.818 }, 00:16:34.818 "memory_domains": [ 00:16:34.818 { 00:16:34.818 "dma_device_id": "system", 00:16:34.818 "dma_device_type": 1 00:16:34.818 }, 00:16:34.818 { 00:16:34.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.818 "dma_device_type": 2 00:16:34.818 } 00:16:34.818 ], 00:16:34.818 "driver_specific": {} 00:16:34.818 } 00:16:34.818 ] 00:16:34.818 21:43:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.818 21:43:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:34.818 21:43:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:34.818 21:43:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:34.818 21:43:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.819 21:43:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.819 21:43:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.819 21:43:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:34.819 21:43:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.819 21:43:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.819 21:43:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.819 21:43:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.819 21:43:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.819 21:43:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.819 21:43:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.819 21:43:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.819 21:43:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.819 21:43:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.819 "name": "Existed_Raid", 00:16:34.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.819 "strip_size_kb": 64, 00:16:34.819 "state": "configuring", 00:16:34.819 "raid_level": "raid5f", 00:16:34.819 "superblock": false, 00:16:34.819 "num_base_bdevs": 3, 00:16:34.819 "num_base_bdevs_discovered": 1, 00:16:34.819 "num_base_bdevs_operational": 3, 00:16:34.819 "base_bdevs_list": [ 00:16:34.819 { 00:16:34.819 "name": "BaseBdev1", 00:16:34.819 "uuid": "59c37adf-b8b3-46ae-8d33-e35239a223d9", 00:16:34.819 "is_configured": true, 00:16:34.819 "data_offset": 0, 00:16:34.819 "data_size": 65536 00:16:34.819 }, 00:16:34.819 { 00:16:34.819 "name": 
"BaseBdev2", 00:16:34.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.819 "is_configured": false, 00:16:34.819 "data_offset": 0, 00:16:34.819 "data_size": 0 00:16:34.819 }, 00:16:34.819 { 00:16:34.819 "name": "BaseBdev3", 00:16:34.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.819 "is_configured": false, 00:16:34.819 "data_offset": 0, 00:16:34.819 "data_size": 0 00:16:34.819 } 00:16:34.819 ] 00:16:34.819 }' 00:16:34.819 21:43:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.819 21:43:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.080 21:43:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:35.080 21:43:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.080 21:43:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.080 [2024-12-10 21:43:35.826500] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:35.080 [2024-12-10 21:43:35.826605] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:35.080 21:43:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.080 21:43:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:35.080 21:43:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.080 21:43:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.080 [2024-12-10 21:43:35.838549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:35.080 [2024-12-10 21:43:35.840780] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:16:35.080 [2024-12-10 21:43:35.840913] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:35.080 [2024-12-10 21:43:35.840961] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:35.080 [2024-12-10 21:43:35.841004] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:35.080 21:43:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.080 21:43:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:35.080 21:43:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:35.080 21:43:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:35.080 21:43:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:35.080 21:43:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:35.080 21:43:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:35.080 21:43:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.080 21:43:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:35.080 21:43:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.080 21:43:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.080 21:43:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.080 21:43:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.080 21:43:35 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.080 21:43:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.080 21:43:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.080 21:43:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.341 21:43:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.341 21:43:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.341 "name": "Existed_Raid", 00:16:35.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.341 "strip_size_kb": 64, 00:16:35.341 "state": "configuring", 00:16:35.341 "raid_level": "raid5f", 00:16:35.341 "superblock": false, 00:16:35.341 "num_base_bdevs": 3, 00:16:35.341 "num_base_bdevs_discovered": 1, 00:16:35.341 "num_base_bdevs_operational": 3, 00:16:35.341 "base_bdevs_list": [ 00:16:35.342 { 00:16:35.342 "name": "BaseBdev1", 00:16:35.342 "uuid": "59c37adf-b8b3-46ae-8d33-e35239a223d9", 00:16:35.342 "is_configured": true, 00:16:35.342 "data_offset": 0, 00:16:35.342 "data_size": 65536 00:16:35.342 }, 00:16:35.342 { 00:16:35.342 "name": "BaseBdev2", 00:16:35.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.342 "is_configured": false, 00:16:35.342 "data_offset": 0, 00:16:35.342 "data_size": 0 00:16:35.342 }, 00:16:35.342 { 00:16:35.342 "name": "BaseBdev3", 00:16:35.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.342 "is_configured": false, 00:16:35.342 "data_offset": 0, 00:16:35.342 "data_size": 0 00:16:35.342 } 00:16:35.342 ] 00:16:35.342 }' 00:16:35.342 21:43:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.342 21:43:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.709 21:43:36 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:35.709 21:43:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.709 21:43:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.709 [2024-12-10 21:43:36.371087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:35.709 BaseBdev2 00:16:35.710 21:43:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.710 21:43:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:35.710 21:43:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:35.710 21:43:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:35.710 21:43:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:35.710 21:43:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:35.710 21:43:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:35.710 21:43:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:35.710 21:43:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.710 21:43:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.710 21:43:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.710 21:43:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:35.710 21:43:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.710 21:43:36 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:35.710 [ 00:16:35.710 { 00:16:35.710 "name": "BaseBdev2", 00:16:35.710 "aliases": [ 00:16:35.710 "1abfb957-81e2-4701-945f-f7d9c84558c5" 00:16:35.710 ], 00:16:35.710 "product_name": "Malloc disk", 00:16:35.710 "block_size": 512, 00:16:35.710 "num_blocks": 65536, 00:16:35.710 "uuid": "1abfb957-81e2-4701-945f-f7d9c84558c5", 00:16:35.710 "assigned_rate_limits": { 00:16:35.710 "rw_ios_per_sec": 0, 00:16:35.710 "rw_mbytes_per_sec": 0, 00:16:35.710 "r_mbytes_per_sec": 0, 00:16:35.710 "w_mbytes_per_sec": 0 00:16:35.710 }, 00:16:35.710 "claimed": true, 00:16:35.710 "claim_type": "exclusive_write", 00:16:35.710 "zoned": false, 00:16:35.710 "supported_io_types": { 00:16:35.710 "read": true, 00:16:35.710 "write": true, 00:16:35.710 "unmap": true, 00:16:35.710 "flush": true, 00:16:35.710 "reset": true, 00:16:35.710 "nvme_admin": false, 00:16:35.710 "nvme_io": false, 00:16:35.710 "nvme_io_md": false, 00:16:35.710 "write_zeroes": true, 00:16:35.710 "zcopy": true, 00:16:35.710 "get_zone_info": false, 00:16:35.710 "zone_management": false, 00:16:35.710 "zone_append": false, 00:16:35.710 "compare": false, 00:16:35.710 "compare_and_write": false, 00:16:35.710 "abort": true, 00:16:35.710 "seek_hole": false, 00:16:35.710 "seek_data": false, 00:16:35.710 "copy": true, 00:16:35.710 "nvme_iov_md": false 00:16:35.710 }, 00:16:35.710 "memory_domains": [ 00:16:35.710 { 00:16:35.710 "dma_device_id": "system", 00:16:35.710 "dma_device_type": 1 00:16:35.710 }, 00:16:35.710 { 00:16:35.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.710 "dma_device_type": 2 00:16:35.710 } 00:16:35.710 ], 00:16:35.710 "driver_specific": {} 00:16:35.710 } 00:16:35.710 ] 00:16:35.710 21:43:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.710 21:43:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:35.710 21:43:36 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:35.710 21:43:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:35.710 21:43:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:35.710 21:43:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:35.710 21:43:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:35.710 21:43:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:35.710 21:43:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.710 21:43:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:35.710 21:43:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.710 21:43:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.710 21:43:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.710 21:43:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.710 21:43:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.710 21:43:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.710 21:43:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.710 21:43:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.710 21:43:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.710 21:43:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:16:35.710 "name": "Existed_Raid", 00:16:35.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.710 "strip_size_kb": 64, 00:16:35.710 "state": "configuring", 00:16:35.710 "raid_level": "raid5f", 00:16:35.710 "superblock": false, 00:16:35.710 "num_base_bdevs": 3, 00:16:35.710 "num_base_bdevs_discovered": 2, 00:16:35.710 "num_base_bdevs_operational": 3, 00:16:35.710 "base_bdevs_list": [ 00:16:35.710 { 00:16:35.710 "name": "BaseBdev1", 00:16:35.710 "uuid": "59c37adf-b8b3-46ae-8d33-e35239a223d9", 00:16:35.710 "is_configured": true, 00:16:35.710 "data_offset": 0, 00:16:35.710 "data_size": 65536 00:16:35.710 }, 00:16:35.710 { 00:16:35.710 "name": "BaseBdev2", 00:16:35.710 "uuid": "1abfb957-81e2-4701-945f-f7d9c84558c5", 00:16:35.710 "is_configured": true, 00:16:35.710 "data_offset": 0, 00:16:35.710 "data_size": 65536 00:16:35.710 }, 00:16:35.710 { 00:16:35.710 "name": "BaseBdev3", 00:16:35.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.710 "is_configured": false, 00:16:35.710 "data_offset": 0, 00:16:35.710 "data_size": 0 00:16:35.710 } 00:16:35.710 ] 00:16:35.710 }' 00:16:35.710 21:43:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.710 21:43:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.277 21:43:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:36.277 21:43:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.277 21:43:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.277 [2024-12-10 21:43:36.943209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:36.277 [2024-12-10 21:43:36.943283] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:36.277 [2024-12-10 21:43:36.943300] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:36.277 [2024-12-10 21:43:36.943772] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:36.277 [2024-12-10 21:43:36.950222] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:36.278 [2024-12-10 21:43:36.950250] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:36.278 [2024-12-10 21:43:36.950540] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:36.278 BaseBdev3 00:16:36.278 21:43:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.278 21:43:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:36.278 21:43:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:36.278 21:43:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:36.278 21:43:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:36.278 21:43:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:36.278 21:43:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:36.278 21:43:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:36.278 21:43:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.278 21:43:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.278 21:43:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.278 21:43:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:16:36.278 21:43:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.278 21:43:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.278 [ 00:16:36.278 { 00:16:36.278 "name": "BaseBdev3", 00:16:36.278 "aliases": [ 00:16:36.278 "58d86d47-bcff-45f4-a94a-12bdb7dd9573" 00:16:36.278 ], 00:16:36.278 "product_name": "Malloc disk", 00:16:36.278 "block_size": 512, 00:16:36.278 "num_blocks": 65536, 00:16:36.278 "uuid": "58d86d47-bcff-45f4-a94a-12bdb7dd9573", 00:16:36.278 "assigned_rate_limits": { 00:16:36.278 "rw_ios_per_sec": 0, 00:16:36.278 "rw_mbytes_per_sec": 0, 00:16:36.278 "r_mbytes_per_sec": 0, 00:16:36.278 "w_mbytes_per_sec": 0 00:16:36.278 }, 00:16:36.278 "claimed": true, 00:16:36.278 "claim_type": "exclusive_write", 00:16:36.278 "zoned": false, 00:16:36.278 "supported_io_types": { 00:16:36.278 "read": true, 00:16:36.278 "write": true, 00:16:36.278 "unmap": true, 00:16:36.278 "flush": true, 00:16:36.278 "reset": true, 00:16:36.278 "nvme_admin": false, 00:16:36.278 "nvme_io": false, 00:16:36.278 "nvme_io_md": false, 00:16:36.278 "write_zeroes": true, 00:16:36.278 "zcopy": true, 00:16:36.278 "get_zone_info": false, 00:16:36.278 "zone_management": false, 00:16:36.278 "zone_append": false, 00:16:36.278 "compare": false, 00:16:36.278 "compare_and_write": false, 00:16:36.278 "abort": true, 00:16:36.278 "seek_hole": false, 00:16:36.278 "seek_data": false, 00:16:36.278 "copy": true, 00:16:36.278 "nvme_iov_md": false 00:16:36.278 }, 00:16:36.278 "memory_domains": [ 00:16:36.278 { 00:16:36.278 "dma_device_id": "system", 00:16:36.278 "dma_device_type": 1 00:16:36.278 }, 00:16:36.278 { 00:16:36.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.278 "dma_device_type": 2 00:16:36.278 } 00:16:36.278 ], 00:16:36.278 "driver_specific": {} 00:16:36.278 } 00:16:36.278 ] 00:16:36.278 21:43:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:16:36.278 21:43:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:36.278 21:43:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:36.278 21:43:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:36.278 21:43:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:36.278 21:43:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:36.278 21:43:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.278 21:43:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:36.278 21:43:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.278 21:43:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:36.278 21:43:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.278 21:43:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.278 21:43:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.278 21:43:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.278 21:43:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.278 21:43:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.278 21:43:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.278 21:43:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.278 21:43:37 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.278 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.278 "name": "Existed_Raid", 00:16:36.278 "uuid": "67dfb0cc-e459-4a3a-b4b0-b97863d29608", 00:16:36.278 "strip_size_kb": 64, 00:16:36.278 "state": "online", 00:16:36.278 "raid_level": "raid5f", 00:16:36.278 "superblock": false, 00:16:36.278 "num_base_bdevs": 3, 00:16:36.278 "num_base_bdevs_discovered": 3, 00:16:36.278 "num_base_bdevs_operational": 3, 00:16:36.278 "base_bdevs_list": [ 00:16:36.278 { 00:16:36.278 "name": "BaseBdev1", 00:16:36.278 "uuid": "59c37adf-b8b3-46ae-8d33-e35239a223d9", 00:16:36.278 "is_configured": true, 00:16:36.278 "data_offset": 0, 00:16:36.278 "data_size": 65536 00:16:36.278 }, 00:16:36.278 { 00:16:36.278 "name": "BaseBdev2", 00:16:36.278 "uuid": "1abfb957-81e2-4701-945f-f7d9c84558c5", 00:16:36.278 "is_configured": true, 00:16:36.278 "data_offset": 0, 00:16:36.278 "data_size": 65536 00:16:36.278 }, 00:16:36.278 { 00:16:36.278 "name": "BaseBdev3", 00:16:36.278 "uuid": "58d86d47-bcff-45f4-a94a-12bdb7dd9573", 00:16:36.278 "is_configured": true, 00:16:36.278 "data_offset": 0, 00:16:36.278 "data_size": 65536 00:16:36.278 } 00:16:36.278 ] 00:16:36.278 }' 00:16:36.278 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.278 21:43:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.847 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:36.847 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:36.847 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:36.847 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:36.847 21:43:37 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:36.847 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:36.847 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:36.847 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:36.847 21:43:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.847 21:43:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.847 [2024-12-10 21:43:37.501121] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:36.847 21:43:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.847 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:36.847 "name": "Existed_Raid", 00:16:36.847 "aliases": [ 00:16:36.847 "67dfb0cc-e459-4a3a-b4b0-b97863d29608" 00:16:36.847 ], 00:16:36.847 "product_name": "Raid Volume", 00:16:36.847 "block_size": 512, 00:16:36.847 "num_blocks": 131072, 00:16:36.847 "uuid": "67dfb0cc-e459-4a3a-b4b0-b97863d29608", 00:16:36.847 "assigned_rate_limits": { 00:16:36.847 "rw_ios_per_sec": 0, 00:16:36.847 "rw_mbytes_per_sec": 0, 00:16:36.847 "r_mbytes_per_sec": 0, 00:16:36.847 "w_mbytes_per_sec": 0 00:16:36.847 }, 00:16:36.847 "claimed": false, 00:16:36.847 "zoned": false, 00:16:36.847 "supported_io_types": { 00:16:36.847 "read": true, 00:16:36.847 "write": true, 00:16:36.847 "unmap": false, 00:16:36.847 "flush": false, 00:16:36.847 "reset": true, 00:16:36.847 "nvme_admin": false, 00:16:36.847 "nvme_io": false, 00:16:36.847 "nvme_io_md": false, 00:16:36.847 "write_zeroes": true, 00:16:36.847 "zcopy": false, 00:16:36.847 "get_zone_info": false, 00:16:36.847 "zone_management": false, 00:16:36.847 "zone_append": false, 
00:16:36.847 "compare": false, 00:16:36.847 "compare_and_write": false, 00:16:36.847 "abort": false, 00:16:36.847 "seek_hole": false, 00:16:36.848 "seek_data": false, 00:16:36.848 "copy": false, 00:16:36.848 "nvme_iov_md": false 00:16:36.848 }, 00:16:36.848 "driver_specific": { 00:16:36.848 "raid": { 00:16:36.848 "uuid": "67dfb0cc-e459-4a3a-b4b0-b97863d29608", 00:16:36.848 "strip_size_kb": 64, 00:16:36.848 "state": "online", 00:16:36.848 "raid_level": "raid5f", 00:16:36.848 "superblock": false, 00:16:36.848 "num_base_bdevs": 3, 00:16:36.848 "num_base_bdevs_discovered": 3, 00:16:36.848 "num_base_bdevs_operational": 3, 00:16:36.848 "base_bdevs_list": [ 00:16:36.848 { 00:16:36.848 "name": "BaseBdev1", 00:16:36.848 "uuid": "59c37adf-b8b3-46ae-8d33-e35239a223d9", 00:16:36.848 "is_configured": true, 00:16:36.848 "data_offset": 0, 00:16:36.848 "data_size": 65536 00:16:36.848 }, 00:16:36.848 { 00:16:36.848 "name": "BaseBdev2", 00:16:36.848 "uuid": "1abfb957-81e2-4701-945f-f7d9c84558c5", 00:16:36.848 "is_configured": true, 00:16:36.848 "data_offset": 0, 00:16:36.848 "data_size": 65536 00:16:36.848 }, 00:16:36.848 { 00:16:36.848 "name": "BaseBdev3", 00:16:36.848 "uuid": "58d86d47-bcff-45f4-a94a-12bdb7dd9573", 00:16:36.848 "is_configured": true, 00:16:36.848 "data_offset": 0, 00:16:36.848 "data_size": 65536 00:16:36.848 } 00:16:36.848 ] 00:16:36.848 } 00:16:36.848 } 00:16:36.848 }' 00:16:36.848 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:36.848 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:36.848 BaseBdev2 00:16:36.848 BaseBdev3' 00:16:36.848 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:37.107 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:16:37.107 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:37.107 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:37.107 21:43:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.107 21:43:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.107 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:37.107 21:43:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.107 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:37.107 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:37.107 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:37.107 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:37.107 21:43:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.107 21:43:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.107 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:37.107 21:43:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.107 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:37.107 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:37.107 21:43:37 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:37.108 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:37.108 21:43:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.108 21:43:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.108 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:37.108 21:43:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.108 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:37.108 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:37.108 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:37.108 21:43:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.108 21:43:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.108 [2024-12-10 21:43:37.800461] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:37.367 21:43:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.367 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:37.367 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:37.367 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:37.367 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:37.367 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:37.367 
21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:16:37.367 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:37.367 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:37.367 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:37.367 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.367 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:37.367 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.367 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.367 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.367 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.367 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.367 21:43:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.367 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.367 21:43:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.367 21:43:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.367 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.367 "name": "Existed_Raid", 00:16:37.367 "uuid": "67dfb0cc-e459-4a3a-b4b0-b97863d29608", 00:16:37.367 "strip_size_kb": 64, 00:16:37.367 "state": 
"online", 00:16:37.367 "raid_level": "raid5f", 00:16:37.367 "superblock": false, 00:16:37.367 "num_base_bdevs": 3, 00:16:37.367 "num_base_bdevs_discovered": 2, 00:16:37.367 "num_base_bdevs_operational": 2, 00:16:37.367 "base_bdevs_list": [ 00:16:37.367 { 00:16:37.367 "name": null, 00:16:37.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.367 "is_configured": false, 00:16:37.367 "data_offset": 0, 00:16:37.367 "data_size": 65536 00:16:37.367 }, 00:16:37.367 { 00:16:37.367 "name": "BaseBdev2", 00:16:37.367 "uuid": "1abfb957-81e2-4701-945f-f7d9c84558c5", 00:16:37.367 "is_configured": true, 00:16:37.367 "data_offset": 0, 00:16:37.367 "data_size": 65536 00:16:37.367 }, 00:16:37.367 { 00:16:37.367 "name": "BaseBdev3", 00:16:37.367 "uuid": "58d86d47-bcff-45f4-a94a-12bdb7dd9573", 00:16:37.367 "is_configured": true, 00:16:37.367 "data_offset": 0, 00:16:37.367 "data_size": 65536 00:16:37.367 } 00:16:37.367 ] 00:16:37.367 }' 00:16:37.367 21:43:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.367 21:43:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.626 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:37.626 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:37.626 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:37.626 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.626 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.626 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.626 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.626 21:43:38 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:37.626 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:37.627 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:37.627 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.627 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.627 [2024-12-10 21:43:38.405743] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:37.627 [2024-12-10 21:43:38.405850] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:37.885 [2024-12-10 21:43:38.507096] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:37.885 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.885 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:37.885 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:37.885 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.885 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:37.885 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.885 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.885 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.885 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:37.885 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:16:37.885 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:37.885 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.885 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.885 [2024-12-10 21:43:38.567070] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:37.885 [2024-12-10 21:43:38.567130] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:38.144 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.144 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:38.144 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:38.144 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.144 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:38.144 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.144 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.144 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.144 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:38.144 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:38.144 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:16:38.144 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:38.144 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:16:38.144 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:38.144 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.144 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.144 BaseBdev2 00:16:38.144 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.144 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:38.144 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:38.144 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:38.144 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:38.144 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:38.144 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:38.144 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:38.144 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.144 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.144 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.144 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:38.144 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.144 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:16:38.144 [ 00:16:38.144 { 00:16:38.144 "name": "BaseBdev2", 00:16:38.144 "aliases": [ 00:16:38.144 "b88e253c-bbbf-4428-b397-41fef8970464" 00:16:38.144 ], 00:16:38.145 "product_name": "Malloc disk", 00:16:38.145 "block_size": 512, 00:16:38.145 "num_blocks": 65536, 00:16:38.145 "uuid": "b88e253c-bbbf-4428-b397-41fef8970464", 00:16:38.145 "assigned_rate_limits": { 00:16:38.145 "rw_ios_per_sec": 0, 00:16:38.145 "rw_mbytes_per_sec": 0, 00:16:38.145 "r_mbytes_per_sec": 0, 00:16:38.145 "w_mbytes_per_sec": 0 00:16:38.145 }, 00:16:38.145 "claimed": false, 00:16:38.145 "zoned": false, 00:16:38.145 "supported_io_types": { 00:16:38.145 "read": true, 00:16:38.145 "write": true, 00:16:38.145 "unmap": true, 00:16:38.145 "flush": true, 00:16:38.145 "reset": true, 00:16:38.145 "nvme_admin": false, 00:16:38.145 "nvme_io": false, 00:16:38.145 "nvme_io_md": false, 00:16:38.145 "write_zeroes": true, 00:16:38.145 "zcopy": true, 00:16:38.145 "get_zone_info": false, 00:16:38.145 "zone_management": false, 00:16:38.145 "zone_append": false, 00:16:38.145 "compare": false, 00:16:38.145 "compare_and_write": false, 00:16:38.145 "abort": true, 00:16:38.145 "seek_hole": false, 00:16:38.145 "seek_data": false, 00:16:38.145 "copy": true, 00:16:38.145 "nvme_iov_md": false 00:16:38.145 }, 00:16:38.145 "memory_domains": [ 00:16:38.145 { 00:16:38.145 "dma_device_id": "system", 00:16:38.145 "dma_device_type": 1 00:16:38.145 }, 00:16:38.145 { 00:16:38.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.145 "dma_device_type": 2 00:16:38.145 } 00:16:38.145 ], 00:16:38.145 "driver_specific": {} 00:16:38.145 } 00:16:38.145 ] 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.145 BaseBdev3 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:38.145 [ 00:16:38.145 { 00:16:38.145 "name": "BaseBdev3", 00:16:38.145 "aliases": [ 00:16:38.145 "20809078-2c31-4fe2-b58b-b79adfee62a2" 00:16:38.145 ], 00:16:38.145 "product_name": "Malloc disk", 00:16:38.145 "block_size": 512, 00:16:38.145 "num_blocks": 65536, 00:16:38.145 "uuid": "20809078-2c31-4fe2-b58b-b79adfee62a2", 00:16:38.145 "assigned_rate_limits": { 00:16:38.145 "rw_ios_per_sec": 0, 00:16:38.145 "rw_mbytes_per_sec": 0, 00:16:38.145 "r_mbytes_per_sec": 0, 00:16:38.145 "w_mbytes_per_sec": 0 00:16:38.145 }, 00:16:38.145 "claimed": false, 00:16:38.145 "zoned": false, 00:16:38.145 "supported_io_types": { 00:16:38.145 "read": true, 00:16:38.145 "write": true, 00:16:38.145 "unmap": true, 00:16:38.145 "flush": true, 00:16:38.145 "reset": true, 00:16:38.145 "nvme_admin": false, 00:16:38.145 "nvme_io": false, 00:16:38.145 "nvme_io_md": false, 00:16:38.145 "write_zeroes": true, 00:16:38.145 "zcopy": true, 00:16:38.145 "get_zone_info": false, 00:16:38.145 "zone_management": false, 00:16:38.145 "zone_append": false, 00:16:38.145 "compare": false, 00:16:38.145 "compare_and_write": false, 00:16:38.145 "abort": true, 00:16:38.145 "seek_hole": false, 00:16:38.145 "seek_data": false, 00:16:38.145 "copy": true, 00:16:38.145 "nvme_iov_md": false 00:16:38.145 }, 00:16:38.145 "memory_domains": [ 00:16:38.145 { 00:16:38.145 "dma_device_id": "system", 00:16:38.145 "dma_device_type": 1 00:16:38.145 }, 00:16:38.145 { 00:16:38.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.145 "dma_device_type": 2 00:16:38.145 } 00:16:38.145 ], 00:16:38.145 "driver_specific": {} 00:16:38.145 } 00:16:38.145 ] 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:38.145 21:43:38 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.145 [2024-12-10 21:43:38.898996] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:38.145 [2024-12-10 21:43:38.899049] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:38.145 [2024-12-10 21:43:38.899074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:38.145 [2024-12-10 21:43:38.901125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.145 21:43:38 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.145 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.404 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.404 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.404 "name": "Existed_Raid", 00:16:38.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.404 "strip_size_kb": 64, 00:16:38.404 "state": "configuring", 00:16:38.404 "raid_level": "raid5f", 00:16:38.404 "superblock": false, 00:16:38.404 "num_base_bdevs": 3, 00:16:38.404 "num_base_bdevs_discovered": 2, 00:16:38.404 "num_base_bdevs_operational": 3, 00:16:38.404 "base_bdevs_list": [ 00:16:38.404 { 00:16:38.404 "name": "BaseBdev1", 00:16:38.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.404 "is_configured": false, 00:16:38.404 "data_offset": 0, 00:16:38.404 "data_size": 0 00:16:38.404 }, 00:16:38.404 { 00:16:38.404 "name": "BaseBdev2", 00:16:38.404 "uuid": "b88e253c-bbbf-4428-b397-41fef8970464", 00:16:38.404 "is_configured": true, 00:16:38.404 "data_offset": 0, 00:16:38.404 "data_size": 65536 00:16:38.404 }, 00:16:38.404 { 00:16:38.404 "name": "BaseBdev3", 00:16:38.404 "uuid": "20809078-2c31-4fe2-b58b-b79adfee62a2", 00:16:38.404 "is_configured": true, 
00:16:38.404 "data_offset": 0, 00:16:38.404 "data_size": 65536 00:16:38.404 } 00:16:38.404 ] 00:16:38.404 }' 00:16:38.404 21:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.404 21:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.662 21:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:38.662 21:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.662 21:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.662 [2024-12-10 21:43:39.362263] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:38.662 21:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.662 21:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:38.663 21:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:38.663 21:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:38.663 21:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.663 21:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.663 21:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:38.663 21:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.663 21:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.663 21:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.663 21:43:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.663 21:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.663 21:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.663 21:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.663 21:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.663 21:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.663 21:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.663 "name": "Existed_Raid", 00:16:38.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.663 "strip_size_kb": 64, 00:16:38.663 "state": "configuring", 00:16:38.663 "raid_level": "raid5f", 00:16:38.663 "superblock": false, 00:16:38.663 "num_base_bdevs": 3, 00:16:38.663 "num_base_bdevs_discovered": 1, 00:16:38.663 "num_base_bdevs_operational": 3, 00:16:38.663 "base_bdevs_list": [ 00:16:38.663 { 00:16:38.663 "name": "BaseBdev1", 00:16:38.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.663 "is_configured": false, 00:16:38.663 "data_offset": 0, 00:16:38.663 "data_size": 0 00:16:38.663 }, 00:16:38.663 { 00:16:38.663 "name": null, 00:16:38.663 "uuid": "b88e253c-bbbf-4428-b397-41fef8970464", 00:16:38.663 "is_configured": false, 00:16:38.663 "data_offset": 0, 00:16:38.663 "data_size": 65536 00:16:38.663 }, 00:16:38.663 { 00:16:38.663 "name": "BaseBdev3", 00:16:38.663 "uuid": "20809078-2c31-4fe2-b58b-b79adfee62a2", 00:16:38.663 "is_configured": true, 00:16:38.663 "data_offset": 0, 00:16:38.663 "data_size": 65536 00:16:38.663 } 00:16:38.663 ] 00:16:38.663 }' 00:16:38.663 21:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.663 21:43:39 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.229 [2024-12-10 21:43:39.902002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:39.229 BaseBdev1 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:39.229 21:43:39 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.229 [ 00:16:39.229 { 00:16:39.229 "name": "BaseBdev1", 00:16:39.229 "aliases": [ 00:16:39.229 "88aad341-68d6-4c94-abac-ddabf67af82c" 00:16:39.229 ], 00:16:39.229 "product_name": "Malloc disk", 00:16:39.229 "block_size": 512, 00:16:39.229 "num_blocks": 65536, 00:16:39.229 "uuid": "88aad341-68d6-4c94-abac-ddabf67af82c", 00:16:39.229 "assigned_rate_limits": { 00:16:39.229 "rw_ios_per_sec": 0, 00:16:39.229 "rw_mbytes_per_sec": 0, 00:16:39.229 "r_mbytes_per_sec": 0, 00:16:39.229 "w_mbytes_per_sec": 0 00:16:39.229 }, 00:16:39.229 "claimed": true, 00:16:39.229 "claim_type": "exclusive_write", 00:16:39.229 "zoned": false, 00:16:39.229 "supported_io_types": { 00:16:39.229 "read": true, 00:16:39.229 "write": true, 00:16:39.229 "unmap": true, 00:16:39.229 "flush": true, 00:16:39.229 "reset": true, 00:16:39.229 "nvme_admin": false, 00:16:39.229 "nvme_io": false, 00:16:39.229 "nvme_io_md": false, 00:16:39.229 "write_zeroes": true, 00:16:39.229 "zcopy": true, 00:16:39.229 "get_zone_info": false, 00:16:39.229 "zone_management": false, 00:16:39.229 "zone_append": false, 00:16:39.229 
"compare": false, 00:16:39.229 "compare_and_write": false, 00:16:39.229 "abort": true, 00:16:39.229 "seek_hole": false, 00:16:39.229 "seek_data": false, 00:16:39.229 "copy": true, 00:16:39.229 "nvme_iov_md": false 00:16:39.229 }, 00:16:39.229 "memory_domains": [ 00:16:39.229 { 00:16:39.229 "dma_device_id": "system", 00:16:39.229 "dma_device_type": 1 00:16:39.229 }, 00:16:39.229 { 00:16:39.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.229 "dma_device_type": 2 00:16:39.229 } 00:16:39.229 ], 00:16:39.229 "driver_specific": {} 00:16:39.229 } 00:16:39.229 ] 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.229 21:43:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.229 "name": "Existed_Raid", 00:16:39.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.229 "strip_size_kb": 64, 00:16:39.229 "state": "configuring", 00:16:39.229 "raid_level": "raid5f", 00:16:39.229 "superblock": false, 00:16:39.229 "num_base_bdevs": 3, 00:16:39.229 "num_base_bdevs_discovered": 2, 00:16:39.229 "num_base_bdevs_operational": 3, 00:16:39.229 "base_bdevs_list": [ 00:16:39.229 { 00:16:39.229 "name": "BaseBdev1", 00:16:39.229 "uuid": "88aad341-68d6-4c94-abac-ddabf67af82c", 00:16:39.229 "is_configured": true, 00:16:39.229 "data_offset": 0, 00:16:39.229 "data_size": 65536 00:16:39.229 }, 00:16:39.229 { 00:16:39.229 "name": null, 00:16:39.229 "uuid": "b88e253c-bbbf-4428-b397-41fef8970464", 00:16:39.229 "is_configured": false, 00:16:39.229 "data_offset": 0, 00:16:39.229 "data_size": 65536 00:16:39.229 }, 00:16:39.229 { 00:16:39.229 "name": "BaseBdev3", 00:16:39.229 "uuid": "20809078-2c31-4fe2-b58b-b79adfee62a2", 00:16:39.229 "is_configured": true, 00:16:39.229 "data_offset": 0, 00:16:39.229 "data_size": 65536 00:16:39.229 } 00:16:39.229 ] 00:16:39.229 }' 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.229 21:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.796 21:43:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.796 21:43:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.796 21:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:39.796 21:43:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.796 21:43:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.796 21:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:39.796 21:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:39.796 21:43:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.796 21:43:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.796 [2024-12-10 21:43:40.433178] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:39.796 21:43:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.796 21:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:39.796 21:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:39.796 21:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:39.796 21:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:39.796 21:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:39.796 21:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:39.796 21:43:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.796 21:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.796 21:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.796 21:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.796 21:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.796 21:43:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.796 21:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.796 21:43:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.796 21:43:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.796 21:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.796 "name": "Existed_Raid", 00:16:39.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.796 "strip_size_kb": 64, 00:16:39.796 "state": "configuring", 00:16:39.796 "raid_level": "raid5f", 00:16:39.796 "superblock": false, 00:16:39.796 "num_base_bdevs": 3, 00:16:39.796 "num_base_bdevs_discovered": 1, 00:16:39.797 "num_base_bdevs_operational": 3, 00:16:39.797 "base_bdevs_list": [ 00:16:39.797 { 00:16:39.797 "name": "BaseBdev1", 00:16:39.797 "uuid": "88aad341-68d6-4c94-abac-ddabf67af82c", 00:16:39.797 "is_configured": true, 00:16:39.797 "data_offset": 0, 00:16:39.797 "data_size": 65536 00:16:39.797 }, 00:16:39.797 { 00:16:39.797 "name": null, 00:16:39.797 "uuid": "b88e253c-bbbf-4428-b397-41fef8970464", 00:16:39.797 "is_configured": false, 00:16:39.797 "data_offset": 0, 00:16:39.797 "data_size": 65536 00:16:39.797 }, 00:16:39.797 { 00:16:39.797 "name": null, 
00:16:39.797 "uuid": "20809078-2c31-4fe2-b58b-b79adfee62a2", 00:16:39.797 "is_configured": false, 00:16:39.797 "data_offset": 0, 00:16:39.797 "data_size": 65536 00:16:39.797 } 00:16:39.797 ] 00:16:39.797 }' 00:16:39.797 21:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.797 21:43:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.056 21:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.056 21:43:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.056 21:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:40.056 21:43:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.315 21:43:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.315 21:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:40.315 21:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:40.315 21:43:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.315 21:43:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.315 [2024-12-10 21:43:40.876540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:40.315 21:43:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.315 21:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:40.315 21:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:40.315 21:43:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:40.315 21:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.315 21:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.315 21:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:40.315 21:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.315 21:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.315 21:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.315 21:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.315 21:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.315 21:43:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.315 21:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.315 21:43:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.315 21:43:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.315 21:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.315 "name": "Existed_Raid", 00:16:40.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.315 "strip_size_kb": 64, 00:16:40.315 "state": "configuring", 00:16:40.315 "raid_level": "raid5f", 00:16:40.315 "superblock": false, 00:16:40.315 "num_base_bdevs": 3, 00:16:40.315 "num_base_bdevs_discovered": 2, 00:16:40.315 "num_base_bdevs_operational": 3, 00:16:40.315 "base_bdevs_list": [ 00:16:40.315 { 
00:16:40.315 "name": "BaseBdev1", 00:16:40.315 "uuid": "88aad341-68d6-4c94-abac-ddabf67af82c", 00:16:40.315 "is_configured": true, 00:16:40.315 "data_offset": 0, 00:16:40.315 "data_size": 65536 00:16:40.315 }, 00:16:40.315 { 00:16:40.315 "name": null, 00:16:40.315 "uuid": "b88e253c-bbbf-4428-b397-41fef8970464", 00:16:40.315 "is_configured": false, 00:16:40.315 "data_offset": 0, 00:16:40.315 "data_size": 65536 00:16:40.315 }, 00:16:40.315 { 00:16:40.315 "name": "BaseBdev3", 00:16:40.315 "uuid": "20809078-2c31-4fe2-b58b-b79adfee62a2", 00:16:40.315 "is_configured": true, 00:16:40.315 "data_offset": 0, 00:16:40.315 "data_size": 65536 00:16:40.315 } 00:16:40.315 ] 00:16:40.315 }' 00:16:40.315 21:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.315 21:43:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.883 21:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.883 21:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.883 21:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.883 21:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:40.883 21:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.883 21:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:40.883 21:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:40.883 21:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.883 21:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.883 [2024-12-10 21:43:41.439647] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:40.883 21:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.883 21:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:40.883 21:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:40.883 21:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:40.883 21:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.883 21:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.883 21:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:40.883 21:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.883 21:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.883 21:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.883 21:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.883 21:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.883 21:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.883 21:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.883 21:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.883 21:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.883 21:43:41 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.883 "name": "Existed_Raid", 00:16:40.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.883 "strip_size_kb": 64, 00:16:40.883 "state": "configuring", 00:16:40.883 "raid_level": "raid5f", 00:16:40.883 "superblock": false, 00:16:40.883 "num_base_bdevs": 3, 00:16:40.883 "num_base_bdevs_discovered": 1, 00:16:40.883 "num_base_bdevs_operational": 3, 00:16:40.883 "base_bdevs_list": [ 00:16:40.883 { 00:16:40.883 "name": null, 00:16:40.883 "uuid": "88aad341-68d6-4c94-abac-ddabf67af82c", 00:16:40.883 "is_configured": false, 00:16:40.883 "data_offset": 0, 00:16:40.883 "data_size": 65536 00:16:40.883 }, 00:16:40.883 { 00:16:40.883 "name": null, 00:16:40.883 "uuid": "b88e253c-bbbf-4428-b397-41fef8970464", 00:16:40.883 "is_configured": false, 00:16:40.883 "data_offset": 0, 00:16:40.883 "data_size": 65536 00:16:40.883 }, 00:16:40.883 { 00:16:40.883 "name": "BaseBdev3", 00:16:40.883 "uuid": "20809078-2c31-4fe2-b58b-b79adfee62a2", 00:16:40.883 "is_configured": true, 00:16:40.883 "data_offset": 0, 00:16:40.883 "data_size": 65536 00:16:40.883 } 00:16:40.883 ] 00:16:40.883 }' 00:16:40.883 21:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.883 21:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.461 21:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.461 21:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:41.461 21:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.461 21:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.461 21:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.461 21:43:42 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:41.461 21:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:41.461 21:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.461 21:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.461 [2024-12-10 21:43:42.047209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:41.461 21:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.461 21:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:41.461 21:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:41.461 21:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:41.461 21:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.461 21:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.461 21:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:41.461 21:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.461 21:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.461 21:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.461 21:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.461 21:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.461 21:43:42 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.461 21:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.461 21:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.461 21:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.461 21:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.461 "name": "Existed_Raid", 00:16:41.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.461 "strip_size_kb": 64, 00:16:41.461 "state": "configuring", 00:16:41.461 "raid_level": "raid5f", 00:16:41.461 "superblock": false, 00:16:41.461 "num_base_bdevs": 3, 00:16:41.461 "num_base_bdevs_discovered": 2, 00:16:41.461 "num_base_bdevs_operational": 3, 00:16:41.461 "base_bdevs_list": [ 00:16:41.461 { 00:16:41.461 "name": null, 00:16:41.461 "uuid": "88aad341-68d6-4c94-abac-ddabf67af82c", 00:16:41.461 "is_configured": false, 00:16:41.461 "data_offset": 0, 00:16:41.461 "data_size": 65536 00:16:41.461 }, 00:16:41.461 { 00:16:41.461 "name": "BaseBdev2", 00:16:41.461 "uuid": "b88e253c-bbbf-4428-b397-41fef8970464", 00:16:41.461 "is_configured": true, 00:16:41.461 "data_offset": 0, 00:16:41.461 "data_size": 65536 00:16:41.461 }, 00:16:41.461 { 00:16:41.461 "name": "BaseBdev3", 00:16:41.461 "uuid": "20809078-2c31-4fe2-b58b-b79adfee62a2", 00:16:41.461 "is_configured": true, 00:16:41.461 "data_offset": 0, 00:16:41.461 "data_size": 65536 00:16:41.461 } 00:16:41.461 ] 00:16:41.461 }' 00:16:41.461 21:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.461 21:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.719 21:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.719 21:43:42 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.719 21:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.719 21:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 88aad341-68d6-4c94-abac-ddabf67af82c 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.979 [2024-12-10 21:43:42.632275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:41.979 [2024-12-10 21:43:42.632335] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:41.979 [2024-12-10 21:43:42.632345] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:41.979 [2024-12-10 21:43:42.632617] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:16:41.979 [2024-12-10 21:43:42.638130] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:41.979 [2024-12-10 21:43:42.638158] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:41.979 [2024-12-10 21:43:42.638457] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.979 NewBaseBdev 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.979 21:43:42 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.979 [ 00:16:41.979 { 00:16:41.979 "name": "NewBaseBdev", 00:16:41.979 "aliases": [ 00:16:41.979 "88aad341-68d6-4c94-abac-ddabf67af82c" 00:16:41.979 ], 00:16:41.979 "product_name": "Malloc disk", 00:16:41.979 "block_size": 512, 00:16:41.979 "num_blocks": 65536, 00:16:41.979 "uuid": "88aad341-68d6-4c94-abac-ddabf67af82c", 00:16:41.979 "assigned_rate_limits": { 00:16:41.979 "rw_ios_per_sec": 0, 00:16:41.979 "rw_mbytes_per_sec": 0, 00:16:41.979 "r_mbytes_per_sec": 0, 00:16:41.979 "w_mbytes_per_sec": 0 00:16:41.979 }, 00:16:41.979 "claimed": true, 00:16:41.979 "claim_type": "exclusive_write", 00:16:41.979 "zoned": false, 00:16:41.979 "supported_io_types": { 00:16:41.979 "read": true, 00:16:41.979 "write": true, 00:16:41.979 "unmap": true, 00:16:41.979 "flush": true, 00:16:41.979 "reset": true, 00:16:41.979 "nvme_admin": false, 00:16:41.979 "nvme_io": false, 00:16:41.979 "nvme_io_md": false, 00:16:41.979 "write_zeroes": true, 00:16:41.979 "zcopy": true, 00:16:41.979 "get_zone_info": false, 00:16:41.979 "zone_management": false, 00:16:41.979 "zone_append": false, 00:16:41.979 "compare": false, 00:16:41.979 "compare_and_write": false, 00:16:41.979 "abort": true, 00:16:41.979 "seek_hole": false, 00:16:41.979 "seek_data": false, 00:16:41.979 "copy": true, 00:16:41.979 "nvme_iov_md": false 00:16:41.979 }, 00:16:41.979 "memory_domains": [ 00:16:41.979 { 00:16:41.979 "dma_device_id": "system", 00:16:41.979 "dma_device_type": 1 00:16:41.979 }, 00:16:41.979 { 00:16:41.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.979 "dma_device_type": 2 00:16:41.979 } 00:16:41.979 ], 00:16:41.979 "driver_specific": {} 00:16:41.979 } 00:16:41.979 ] 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:41.979 21:43:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.979 "name": "Existed_Raid", 00:16:41.979 "uuid": "2e7498da-99eb-46f4-9eaa-2ba6e0b59f7f", 00:16:41.979 "strip_size_kb": 64, 00:16:41.979 "state": "online", 
00:16:41.979 "raid_level": "raid5f", 00:16:41.979 "superblock": false, 00:16:41.979 "num_base_bdevs": 3, 00:16:41.979 "num_base_bdevs_discovered": 3, 00:16:41.979 "num_base_bdevs_operational": 3, 00:16:41.979 "base_bdevs_list": [ 00:16:41.979 { 00:16:41.979 "name": "NewBaseBdev", 00:16:41.979 "uuid": "88aad341-68d6-4c94-abac-ddabf67af82c", 00:16:41.979 "is_configured": true, 00:16:41.979 "data_offset": 0, 00:16:41.979 "data_size": 65536 00:16:41.979 }, 00:16:41.979 { 00:16:41.979 "name": "BaseBdev2", 00:16:41.979 "uuid": "b88e253c-bbbf-4428-b397-41fef8970464", 00:16:41.979 "is_configured": true, 00:16:41.979 "data_offset": 0, 00:16:41.979 "data_size": 65536 00:16:41.979 }, 00:16:41.979 { 00:16:41.979 "name": "BaseBdev3", 00:16:41.979 "uuid": "20809078-2c31-4fe2-b58b-b79adfee62a2", 00:16:41.979 "is_configured": true, 00:16:41.979 "data_offset": 0, 00:16:41.979 "data_size": 65536 00:16:41.979 } 00:16:41.979 ] 00:16:41.979 }' 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.979 21:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.546 21:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:42.546 21:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:42.546 21:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:42.546 21:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:42.546 21:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:42.546 21:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:42.546 21:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:42.546 21:43:43 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.546 21:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.546 21:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:42.546 [2024-12-10 21:43:43.172800] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:42.546 21:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.546 21:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:42.546 "name": "Existed_Raid", 00:16:42.546 "aliases": [ 00:16:42.546 "2e7498da-99eb-46f4-9eaa-2ba6e0b59f7f" 00:16:42.546 ], 00:16:42.546 "product_name": "Raid Volume", 00:16:42.546 "block_size": 512, 00:16:42.546 "num_blocks": 131072, 00:16:42.546 "uuid": "2e7498da-99eb-46f4-9eaa-2ba6e0b59f7f", 00:16:42.546 "assigned_rate_limits": { 00:16:42.546 "rw_ios_per_sec": 0, 00:16:42.546 "rw_mbytes_per_sec": 0, 00:16:42.546 "r_mbytes_per_sec": 0, 00:16:42.546 "w_mbytes_per_sec": 0 00:16:42.546 }, 00:16:42.546 "claimed": false, 00:16:42.546 "zoned": false, 00:16:42.546 "supported_io_types": { 00:16:42.546 "read": true, 00:16:42.546 "write": true, 00:16:42.546 "unmap": false, 00:16:42.546 "flush": false, 00:16:42.546 "reset": true, 00:16:42.546 "nvme_admin": false, 00:16:42.546 "nvme_io": false, 00:16:42.546 "nvme_io_md": false, 00:16:42.546 "write_zeroes": true, 00:16:42.546 "zcopy": false, 00:16:42.546 "get_zone_info": false, 00:16:42.546 "zone_management": false, 00:16:42.546 "zone_append": false, 00:16:42.546 "compare": false, 00:16:42.546 "compare_and_write": false, 00:16:42.546 "abort": false, 00:16:42.546 "seek_hole": false, 00:16:42.546 "seek_data": false, 00:16:42.546 "copy": false, 00:16:42.546 "nvme_iov_md": false 00:16:42.546 }, 00:16:42.546 "driver_specific": { 00:16:42.546 "raid": { 00:16:42.546 "uuid": 
"2e7498da-99eb-46f4-9eaa-2ba6e0b59f7f", 00:16:42.546 "strip_size_kb": 64, 00:16:42.546 "state": "online", 00:16:42.546 "raid_level": "raid5f", 00:16:42.546 "superblock": false, 00:16:42.546 "num_base_bdevs": 3, 00:16:42.546 "num_base_bdevs_discovered": 3, 00:16:42.546 "num_base_bdevs_operational": 3, 00:16:42.546 "base_bdevs_list": [ 00:16:42.546 { 00:16:42.546 "name": "NewBaseBdev", 00:16:42.546 "uuid": "88aad341-68d6-4c94-abac-ddabf67af82c", 00:16:42.546 "is_configured": true, 00:16:42.546 "data_offset": 0, 00:16:42.546 "data_size": 65536 00:16:42.546 }, 00:16:42.546 { 00:16:42.546 "name": "BaseBdev2", 00:16:42.546 "uuid": "b88e253c-bbbf-4428-b397-41fef8970464", 00:16:42.546 "is_configured": true, 00:16:42.546 "data_offset": 0, 00:16:42.546 "data_size": 65536 00:16:42.546 }, 00:16:42.546 { 00:16:42.546 "name": "BaseBdev3", 00:16:42.546 "uuid": "20809078-2c31-4fe2-b58b-b79adfee62a2", 00:16:42.546 "is_configured": true, 00:16:42.546 "data_offset": 0, 00:16:42.546 "data_size": 65536 00:16:42.546 } 00:16:42.546 ] 00:16:42.546 } 00:16:42.546 } 00:16:42.546 }' 00:16:42.546 21:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:42.546 21:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:42.546 BaseBdev2 00:16:42.546 BaseBdev3' 00:16:42.546 21:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:42.546 21:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:42.546 21:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:42.546 21:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:42.546 21:43:43 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.546 21:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:42.546 21:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.546 21:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.805 21:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:42.805 21:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:42.805 21:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:42.805 21:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:42.805 21:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:42.805 21:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.805 21:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.805 21:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.805 21:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:42.805 21:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:42.805 21:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:42.805 21:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:42.805 21:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.805 21:43:43 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.805 21:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:42.805 21:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.805 21:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:42.805 21:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:42.805 21:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:42.805 21:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.805 21:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.805 [2024-12-10 21:43:43.440186] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:42.805 [2024-12-10 21:43:43.440225] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:42.805 [2024-12-10 21:43:43.440315] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:42.805 [2024-12-10 21:43:43.440662] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:42.805 [2024-12-10 21:43:43.440685] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:42.805 21:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.805 21:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80057 00:16:42.805 21:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80057 ']' 00:16:42.805 21:43:43 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 80057 00:16:42.805 21:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:42.805 21:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:42.805 21:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80057 00:16:42.805 killing process with pid 80057 00:16:42.805 21:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:42.805 21:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:42.805 21:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80057' 00:16:42.805 21:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80057 00:16:42.805 [2024-12-10 21:43:43.489174] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:42.805 21:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80057 00:16:43.064 [2024-12-10 21:43:43.805832] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:44.441 ************************************ 00:16:44.441 END TEST raid5f_state_function_test 00:16:44.441 ************************************ 00:16:44.441 21:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:44.441 00:16:44.441 real 0m11.273s 00:16:44.441 user 0m17.861s 00:16:44.441 sys 0m1.942s 00:16:44.441 21:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:44.441 21:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.441 21:43:45 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:16:44.441 21:43:45 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:44.441 21:43:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:44.441 21:43:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:44.441 ************************************ 00:16:44.441 START TEST raid5f_state_function_test_sb 00:16:44.441 ************************************ 00:16:44.441 21:43:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:16:44.441 21:43:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:44.441 21:43:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:16:44.441 21:43:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:44.441 21:43:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:44.441 21:43:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:44.441 21:43:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:44.441 21:43:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:44.441 21:43:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:44.441 21:43:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:44.441 21:43:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:44.441 21:43:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:44.441 21:43:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:44.441 21:43:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:44.441 21:43:45 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:44.441 21:43:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:44.441 21:43:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:44.441 21:43:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:44.441 21:43:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:44.441 21:43:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:44.441 21:43:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:44.441 21:43:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:44.441 21:43:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:44.442 21:43:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:44.442 21:43:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:44.442 21:43:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:44.442 21:43:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:44.442 21:43:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80684 00:16:44.442 21:43:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:44.442 21:43:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80684' 00:16:44.442 Process raid pid: 80684 00:16:44.442 21:43:45 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80684 00:16:44.442 21:43:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80684 ']' 00:16:44.442 21:43:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.442 21:43:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:44.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.442 21:43:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.442 21:43:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:44.442 21:43:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.701 [2024-12-10 21:43:45.277442] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:16:44.701 [2024-12-10 21:43:45.277569] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:44.701 [2024-12-10 21:43:45.458259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.960 [2024-12-10 21:43:45.588519] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.220 [2024-12-10 21:43:45.807095] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:45.220 [2024-12-10 21:43:45.807146] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:45.479 21:43:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:45.479 21:43:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:45.479 21:43:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:45.479 21:43:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.479 21:43:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.479 [2024-12-10 21:43:46.126461] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:45.479 [2024-12-10 21:43:46.126515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:45.479 [2024-12-10 21:43:46.126527] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:45.479 [2024-12-10 21:43:46.126540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:45.479 [2024-12-10 21:43:46.126547] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:16:45.479 [2024-12-10 21:43:46.126558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:45.479 21:43:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.479 21:43:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:45.479 21:43:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:45.479 21:43:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:45.479 21:43:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.479 21:43:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.479 21:43:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:45.479 21:43:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.479 21:43:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.479 21:43:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.479 21:43:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.479 21:43:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.479 21:43:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.479 21:43:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.479 21:43:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.479 21:43:46 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.479 21:43:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.479 "name": "Existed_Raid", 00:16:45.479 "uuid": "1b8f40b0-56f4-498a-a518-615c053281e8", 00:16:45.479 "strip_size_kb": 64, 00:16:45.479 "state": "configuring", 00:16:45.479 "raid_level": "raid5f", 00:16:45.479 "superblock": true, 00:16:45.479 "num_base_bdevs": 3, 00:16:45.479 "num_base_bdevs_discovered": 0, 00:16:45.479 "num_base_bdevs_operational": 3, 00:16:45.479 "base_bdevs_list": [ 00:16:45.479 { 00:16:45.479 "name": "BaseBdev1", 00:16:45.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.479 "is_configured": false, 00:16:45.479 "data_offset": 0, 00:16:45.479 "data_size": 0 00:16:45.479 }, 00:16:45.479 { 00:16:45.479 "name": "BaseBdev2", 00:16:45.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.479 "is_configured": false, 00:16:45.479 "data_offset": 0, 00:16:45.479 "data_size": 0 00:16:45.479 }, 00:16:45.479 { 00:16:45.479 "name": "BaseBdev3", 00:16:45.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.479 "is_configured": false, 00:16:45.479 "data_offset": 0, 00:16:45.479 "data_size": 0 00:16:45.479 } 00:16:45.479 ] 00:16:45.479 }' 00:16:45.479 21:43:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.479 21:43:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.048 21:43:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:46.048 21:43:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.048 21:43:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.048 [2024-12-10 21:43:46.577602] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:46.048 
[2024-12-10 21:43:46.577644] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:46.048 21:43:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.048 21:43:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:46.048 21:43:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.048 21:43:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.048 [2024-12-10 21:43:46.585623] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:46.048 [2024-12-10 21:43:46.585670] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:46.048 [2024-12-10 21:43:46.585681] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:46.048 [2024-12-10 21:43:46.585692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:46.048 [2024-12-10 21:43:46.585699] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:46.048 [2024-12-10 21:43:46.585708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:46.048 21:43:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.048 21:43:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:46.048 21:43:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.048 21:43:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.048 [2024-12-10 21:43:46.630354] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:46.048 BaseBdev1 00:16:46.048 21:43:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.048 21:43:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:46.048 21:43:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:46.048 21:43:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:46.048 21:43:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:46.048 21:43:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:46.048 21:43:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:46.048 21:43:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:46.048 21:43:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.048 21:43:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.048 21:43:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.048 21:43:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:46.048 21:43:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.048 21:43:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.048 [ 00:16:46.048 { 00:16:46.048 "name": "BaseBdev1", 00:16:46.048 "aliases": [ 00:16:46.048 "d86521d2-5f9d-4758-8b15-1a134537be93" 00:16:46.048 ], 00:16:46.048 "product_name": "Malloc disk", 00:16:46.048 "block_size": 512, 00:16:46.048 
"num_blocks": 65536, 00:16:46.048 "uuid": "d86521d2-5f9d-4758-8b15-1a134537be93", 00:16:46.048 "assigned_rate_limits": { 00:16:46.048 "rw_ios_per_sec": 0, 00:16:46.048 "rw_mbytes_per_sec": 0, 00:16:46.048 "r_mbytes_per_sec": 0, 00:16:46.048 "w_mbytes_per_sec": 0 00:16:46.048 }, 00:16:46.048 "claimed": true, 00:16:46.048 "claim_type": "exclusive_write", 00:16:46.048 "zoned": false, 00:16:46.048 "supported_io_types": { 00:16:46.048 "read": true, 00:16:46.048 "write": true, 00:16:46.048 "unmap": true, 00:16:46.048 "flush": true, 00:16:46.048 "reset": true, 00:16:46.048 "nvme_admin": false, 00:16:46.048 "nvme_io": false, 00:16:46.048 "nvme_io_md": false, 00:16:46.048 "write_zeroes": true, 00:16:46.048 "zcopy": true, 00:16:46.048 "get_zone_info": false, 00:16:46.048 "zone_management": false, 00:16:46.048 "zone_append": false, 00:16:46.048 "compare": false, 00:16:46.048 "compare_and_write": false, 00:16:46.048 "abort": true, 00:16:46.048 "seek_hole": false, 00:16:46.048 "seek_data": false, 00:16:46.048 "copy": true, 00:16:46.048 "nvme_iov_md": false 00:16:46.048 }, 00:16:46.048 "memory_domains": [ 00:16:46.048 { 00:16:46.048 "dma_device_id": "system", 00:16:46.048 "dma_device_type": 1 00:16:46.048 }, 00:16:46.048 { 00:16:46.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.048 "dma_device_type": 2 00:16:46.048 } 00:16:46.048 ], 00:16:46.048 "driver_specific": {} 00:16:46.048 } 00:16:46.048 ] 00:16:46.048 21:43:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.048 21:43:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:46.048 21:43:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:46.049 21:43:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:46.049 21:43:46 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:46.049 21:43:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.049 21:43:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.049 21:43:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:46.049 21:43:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.049 21:43:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.049 21:43:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.049 21:43:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.049 21:43:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.049 21:43:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.049 21:43:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.049 21:43:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.049 21:43:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.049 21:43:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.049 "name": "Existed_Raid", 00:16:46.049 "uuid": "fcbc1dc7-f04b-45d0-9452-6dfc27ed1895", 00:16:46.049 "strip_size_kb": 64, 00:16:46.049 "state": "configuring", 00:16:46.049 "raid_level": "raid5f", 00:16:46.049 "superblock": true, 00:16:46.049 "num_base_bdevs": 3, 00:16:46.049 "num_base_bdevs_discovered": 1, 00:16:46.049 "num_base_bdevs_operational": 3, 00:16:46.049 "base_bdevs_list": [ 00:16:46.049 { 00:16:46.049 
"name": "BaseBdev1", 00:16:46.049 "uuid": "d86521d2-5f9d-4758-8b15-1a134537be93", 00:16:46.049 "is_configured": true, 00:16:46.049 "data_offset": 2048, 00:16:46.049 "data_size": 63488 00:16:46.049 }, 00:16:46.049 { 00:16:46.049 "name": "BaseBdev2", 00:16:46.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.049 "is_configured": false, 00:16:46.049 "data_offset": 0, 00:16:46.049 "data_size": 0 00:16:46.049 }, 00:16:46.049 { 00:16:46.049 "name": "BaseBdev3", 00:16:46.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.049 "is_configured": false, 00:16:46.049 "data_offset": 0, 00:16:46.049 "data_size": 0 00:16:46.049 } 00:16:46.049 ] 00:16:46.049 }' 00:16:46.049 21:43:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.049 21:43:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.626 21:43:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:46.626 21:43:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.626 21:43:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.626 [2024-12-10 21:43:47.129604] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:46.626 [2024-12-10 21:43:47.129765] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:46.626 21:43:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.626 21:43:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:46.626 21:43:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.626 21:43:47 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:16:46.626 [2024-12-10 21:43:47.137667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:46.626 [2024-12-10 21:43:47.139692] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:46.626 [2024-12-10 21:43:47.139734] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:46.626 [2024-12-10 21:43:47.139745] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:46.626 [2024-12-10 21:43:47.139756] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:46.626 21:43:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.626 21:43:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:46.627 21:43:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:46.627 21:43:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:46.627 21:43:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:46.627 21:43:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:46.627 21:43:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.627 21:43:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.627 21:43:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:46.627 21:43:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.627 21:43:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:16:46.627 21:43:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.627 21:43:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.627 21:43:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.627 21:43:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.627 21:43:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.627 21:43:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.627 21:43:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.627 21:43:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.627 "name": "Existed_Raid", 00:16:46.627 "uuid": "10ce9d10-cc2a-489b-939a-fa57c891131e", 00:16:46.627 "strip_size_kb": 64, 00:16:46.627 "state": "configuring", 00:16:46.627 "raid_level": "raid5f", 00:16:46.627 "superblock": true, 00:16:46.627 "num_base_bdevs": 3, 00:16:46.627 "num_base_bdevs_discovered": 1, 00:16:46.627 "num_base_bdevs_operational": 3, 00:16:46.627 "base_bdevs_list": [ 00:16:46.627 { 00:16:46.627 "name": "BaseBdev1", 00:16:46.627 "uuid": "d86521d2-5f9d-4758-8b15-1a134537be93", 00:16:46.627 "is_configured": true, 00:16:46.627 "data_offset": 2048, 00:16:46.627 "data_size": 63488 00:16:46.627 }, 00:16:46.627 { 00:16:46.627 "name": "BaseBdev2", 00:16:46.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.627 "is_configured": false, 00:16:46.627 "data_offset": 0, 00:16:46.627 "data_size": 0 00:16:46.627 }, 00:16:46.627 { 00:16:46.627 "name": "BaseBdev3", 00:16:46.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.627 "is_configured": false, 00:16:46.627 "data_offset": 0, 00:16:46.627 "data_size": 
0 00:16:46.627 } 00:16:46.627 ] 00:16:46.627 }' 00:16:46.627 21:43:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.627 21:43:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.896 21:43:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:46.896 21:43:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.896 21:43:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.896 [2024-12-10 21:43:47.628485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:46.896 BaseBdev2 00:16:46.896 21:43:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.896 21:43:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:46.896 21:43:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:46.896 21:43:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:46.896 21:43:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:46.896 21:43:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:46.896 21:43:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:46.896 21:43:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:46.896 21:43:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.896 21:43:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.896 21:43:47 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.896 21:43:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:46.896 21:43:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.896 21:43:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.896 [ 00:16:46.896 { 00:16:46.896 "name": "BaseBdev2", 00:16:46.896 "aliases": [ 00:16:46.896 "22a14b63-2c88-4ee6-9fe4-d73653036a9a" 00:16:46.896 ], 00:16:46.896 "product_name": "Malloc disk", 00:16:46.896 "block_size": 512, 00:16:46.896 "num_blocks": 65536, 00:16:46.896 "uuid": "22a14b63-2c88-4ee6-9fe4-d73653036a9a", 00:16:46.896 "assigned_rate_limits": { 00:16:46.896 "rw_ios_per_sec": 0, 00:16:46.896 "rw_mbytes_per_sec": 0, 00:16:46.896 "r_mbytes_per_sec": 0, 00:16:46.896 "w_mbytes_per_sec": 0 00:16:46.896 }, 00:16:46.896 "claimed": true, 00:16:46.896 "claim_type": "exclusive_write", 00:16:46.896 "zoned": false, 00:16:46.896 "supported_io_types": { 00:16:46.896 "read": true, 00:16:46.896 "write": true, 00:16:46.896 "unmap": true, 00:16:46.896 "flush": true, 00:16:46.896 "reset": true, 00:16:46.896 "nvme_admin": false, 00:16:46.896 "nvme_io": false, 00:16:46.896 "nvme_io_md": false, 00:16:46.896 "write_zeroes": true, 00:16:46.896 "zcopy": true, 00:16:46.896 "get_zone_info": false, 00:16:46.896 "zone_management": false, 00:16:46.896 "zone_append": false, 00:16:46.896 "compare": false, 00:16:46.896 "compare_and_write": false, 00:16:46.896 "abort": true, 00:16:46.896 "seek_hole": false, 00:16:46.896 "seek_data": false, 00:16:46.896 "copy": true, 00:16:46.896 "nvme_iov_md": false 00:16:46.896 }, 00:16:46.896 "memory_domains": [ 00:16:46.896 { 00:16:46.896 "dma_device_id": "system", 00:16:46.896 "dma_device_type": 1 00:16:46.896 }, 00:16:46.896 { 00:16:46.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.896 "dma_device_type": 2 00:16:46.896 } 
00:16:46.896 ], 00:16:46.896 "driver_specific": {} 00:16:46.896 } 00:16:46.896 ] 00:16:46.896 21:43:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.896 21:43:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:46.896 21:43:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:46.897 21:43:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:46.897 21:43:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:46.897 21:43:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:46.897 21:43:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:46.897 21:43:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.897 21:43:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.897 21:43:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:46.897 21:43:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.897 21:43:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.897 21:43:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.897 21:43:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.897 21:43:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.897 21:43:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:16:46.897 21:43:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.897 21:43:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.156 21:43:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.156 21:43:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.156 "name": "Existed_Raid", 00:16:47.156 "uuid": "10ce9d10-cc2a-489b-939a-fa57c891131e", 00:16:47.156 "strip_size_kb": 64, 00:16:47.156 "state": "configuring", 00:16:47.156 "raid_level": "raid5f", 00:16:47.156 "superblock": true, 00:16:47.156 "num_base_bdevs": 3, 00:16:47.156 "num_base_bdevs_discovered": 2, 00:16:47.156 "num_base_bdevs_operational": 3, 00:16:47.157 "base_bdevs_list": [ 00:16:47.157 { 00:16:47.157 "name": "BaseBdev1", 00:16:47.157 "uuid": "d86521d2-5f9d-4758-8b15-1a134537be93", 00:16:47.157 "is_configured": true, 00:16:47.157 "data_offset": 2048, 00:16:47.157 "data_size": 63488 00:16:47.157 }, 00:16:47.157 { 00:16:47.157 "name": "BaseBdev2", 00:16:47.157 "uuid": "22a14b63-2c88-4ee6-9fe4-d73653036a9a", 00:16:47.157 "is_configured": true, 00:16:47.157 "data_offset": 2048, 00:16:47.157 "data_size": 63488 00:16:47.157 }, 00:16:47.157 { 00:16:47.157 "name": "BaseBdev3", 00:16:47.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.157 "is_configured": false, 00:16:47.157 "data_offset": 0, 00:16:47.157 "data_size": 0 00:16:47.157 } 00:16:47.157 ] 00:16:47.157 }' 00:16:47.157 21:43:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.157 21:43:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.417 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:47.417 21:43:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:16:47.417 21:43:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.417 [2024-12-10 21:43:48.143917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:47.417 [2024-12-10 21:43:48.144346] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:47.417 [2024-12-10 21:43:48.144444] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:47.417 BaseBdev3 00:16:47.417 [2024-12-10 21:43:48.144796] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:47.417 21:43:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.417 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:47.417 21:43:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:47.417 21:43:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:47.417 21:43:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:47.417 21:43:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:47.417 21:43:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:47.417 21:43:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:47.417 21:43:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.417 21:43:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.417 [2024-12-10 21:43:48.150862] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:47.417 [2024-12-10 21:43:48.150946] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:47.417 [2024-12-10 21:43:48.151188] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.417 21:43:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.417 21:43:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:47.417 21:43:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.417 21:43:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.417 [ 00:16:47.417 { 00:16:47.417 "name": "BaseBdev3", 00:16:47.417 "aliases": [ 00:16:47.417 "d131e350-8710-49c6-8862-0d27789fd286" 00:16:47.417 ], 00:16:47.417 "product_name": "Malloc disk", 00:16:47.417 "block_size": 512, 00:16:47.417 "num_blocks": 65536, 00:16:47.417 "uuid": "d131e350-8710-49c6-8862-0d27789fd286", 00:16:47.417 "assigned_rate_limits": { 00:16:47.417 "rw_ios_per_sec": 0, 00:16:47.417 "rw_mbytes_per_sec": 0, 00:16:47.417 "r_mbytes_per_sec": 0, 00:16:47.417 "w_mbytes_per_sec": 0 00:16:47.417 }, 00:16:47.417 "claimed": true, 00:16:47.417 "claim_type": "exclusive_write", 00:16:47.417 "zoned": false, 00:16:47.417 "supported_io_types": { 00:16:47.417 "read": true, 00:16:47.417 "write": true, 00:16:47.417 "unmap": true, 00:16:47.417 "flush": true, 00:16:47.417 "reset": true, 00:16:47.417 "nvme_admin": false, 00:16:47.417 "nvme_io": false, 00:16:47.417 "nvme_io_md": false, 00:16:47.417 "write_zeroes": true, 00:16:47.417 "zcopy": true, 00:16:47.417 "get_zone_info": false, 00:16:47.417 "zone_management": false, 00:16:47.417 "zone_append": false, 00:16:47.417 "compare": false, 00:16:47.417 "compare_and_write": false, 00:16:47.417 "abort": true, 00:16:47.417 "seek_hole": false, 00:16:47.417 "seek_data": false, 00:16:47.417 "copy": true, 00:16:47.417 
"nvme_iov_md": false 00:16:47.417 }, 00:16:47.417 "memory_domains": [ 00:16:47.417 { 00:16:47.417 "dma_device_id": "system", 00:16:47.417 "dma_device_type": 1 00:16:47.417 }, 00:16:47.417 { 00:16:47.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.417 "dma_device_type": 2 00:16:47.417 } 00:16:47.417 ], 00:16:47.417 "driver_specific": {} 00:16:47.417 } 00:16:47.417 ] 00:16:47.417 21:43:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.417 21:43:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:47.417 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:47.417 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:47.417 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:47.417 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:47.417 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.417 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.417 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.417 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:47.417 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.417 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.417 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.417 21:43:48 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.417 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.417 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.417 21:43:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.417 21:43:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.676 21:43:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.676 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.676 "name": "Existed_Raid", 00:16:47.676 "uuid": "10ce9d10-cc2a-489b-939a-fa57c891131e", 00:16:47.676 "strip_size_kb": 64, 00:16:47.676 "state": "online", 00:16:47.676 "raid_level": "raid5f", 00:16:47.676 "superblock": true, 00:16:47.676 "num_base_bdevs": 3, 00:16:47.676 "num_base_bdevs_discovered": 3, 00:16:47.676 "num_base_bdevs_operational": 3, 00:16:47.676 "base_bdevs_list": [ 00:16:47.676 { 00:16:47.676 "name": "BaseBdev1", 00:16:47.676 "uuid": "d86521d2-5f9d-4758-8b15-1a134537be93", 00:16:47.676 "is_configured": true, 00:16:47.676 "data_offset": 2048, 00:16:47.676 "data_size": 63488 00:16:47.676 }, 00:16:47.676 { 00:16:47.676 "name": "BaseBdev2", 00:16:47.676 "uuid": "22a14b63-2c88-4ee6-9fe4-d73653036a9a", 00:16:47.676 "is_configured": true, 00:16:47.676 "data_offset": 2048, 00:16:47.676 "data_size": 63488 00:16:47.676 }, 00:16:47.676 { 00:16:47.676 "name": "BaseBdev3", 00:16:47.676 "uuid": "d131e350-8710-49c6-8862-0d27789fd286", 00:16:47.676 "is_configured": true, 00:16:47.676 "data_offset": 2048, 00:16:47.676 "data_size": 63488 00:16:47.676 } 00:16:47.676 ] 00:16:47.676 }' 00:16:47.676 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.676 21:43:48 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.936 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:47.936 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:47.936 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:47.936 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:47.936 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:47.936 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:47.936 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:47.936 21:43:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.936 21:43:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.936 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:47.936 [2024-12-10 21:43:48.649095] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:47.936 21:43:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.936 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:47.936 "name": "Existed_Raid", 00:16:47.936 "aliases": [ 00:16:47.936 "10ce9d10-cc2a-489b-939a-fa57c891131e" 00:16:47.936 ], 00:16:47.936 "product_name": "Raid Volume", 00:16:47.936 "block_size": 512, 00:16:47.936 "num_blocks": 126976, 00:16:47.936 "uuid": "10ce9d10-cc2a-489b-939a-fa57c891131e", 00:16:47.936 "assigned_rate_limits": { 00:16:47.936 "rw_ios_per_sec": 0, 00:16:47.936 
"rw_mbytes_per_sec": 0, 00:16:47.936 "r_mbytes_per_sec": 0, 00:16:47.936 "w_mbytes_per_sec": 0 00:16:47.936 }, 00:16:47.936 "claimed": false, 00:16:47.936 "zoned": false, 00:16:47.936 "supported_io_types": { 00:16:47.936 "read": true, 00:16:47.936 "write": true, 00:16:47.936 "unmap": false, 00:16:47.936 "flush": false, 00:16:47.936 "reset": true, 00:16:47.936 "nvme_admin": false, 00:16:47.936 "nvme_io": false, 00:16:47.936 "nvme_io_md": false, 00:16:47.936 "write_zeroes": true, 00:16:47.936 "zcopy": false, 00:16:47.936 "get_zone_info": false, 00:16:47.936 "zone_management": false, 00:16:47.936 "zone_append": false, 00:16:47.936 "compare": false, 00:16:47.936 "compare_and_write": false, 00:16:47.936 "abort": false, 00:16:47.936 "seek_hole": false, 00:16:47.936 "seek_data": false, 00:16:47.936 "copy": false, 00:16:47.936 "nvme_iov_md": false 00:16:47.936 }, 00:16:47.936 "driver_specific": { 00:16:47.936 "raid": { 00:16:47.936 "uuid": "10ce9d10-cc2a-489b-939a-fa57c891131e", 00:16:47.936 "strip_size_kb": 64, 00:16:47.936 "state": "online", 00:16:47.936 "raid_level": "raid5f", 00:16:47.936 "superblock": true, 00:16:47.936 "num_base_bdevs": 3, 00:16:47.936 "num_base_bdevs_discovered": 3, 00:16:47.936 "num_base_bdevs_operational": 3, 00:16:47.936 "base_bdevs_list": [ 00:16:47.936 { 00:16:47.936 "name": "BaseBdev1", 00:16:47.936 "uuid": "d86521d2-5f9d-4758-8b15-1a134537be93", 00:16:47.936 "is_configured": true, 00:16:47.936 "data_offset": 2048, 00:16:47.936 "data_size": 63488 00:16:47.936 }, 00:16:47.936 { 00:16:47.936 "name": "BaseBdev2", 00:16:47.936 "uuid": "22a14b63-2c88-4ee6-9fe4-d73653036a9a", 00:16:47.936 "is_configured": true, 00:16:47.936 "data_offset": 2048, 00:16:47.936 "data_size": 63488 00:16:47.936 }, 00:16:47.936 { 00:16:47.936 "name": "BaseBdev3", 00:16:47.936 "uuid": "d131e350-8710-49c6-8862-0d27789fd286", 00:16:47.936 "is_configured": true, 00:16:47.936 "data_offset": 2048, 00:16:47.936 "data_size": 63488 00:16:47.936 } 00:16:47.936 ] 00:16:47.936 } 
00:16:47.936 } 00:16:47.936 }' 00:16:47.936 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:48.196 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:48.196 BaseBdev2 00:16:48.196 BaseBdev3' 00:16:48.196 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:48.197 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:48.197 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:48.197 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:48.197 21:43:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.197 21:43:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.197 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:48.197 21:43:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.197 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:48.197 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:48.197 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:48.197 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:48.197 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:48.197 21:43:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.197 21:43:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.197 21:43:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.197 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:48.197 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:48.197 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:48.197 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:48.197 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:48.197 21:43:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.197 21:43:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.197 21:43:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.197 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:48.197 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:48.197 21:43:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:48.197 21:43:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.197 21:43:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.197 [2024-12-10 
21:43:48.912546] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:48.457 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.457 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:48.457 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:48.457 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:48.457 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:48.457 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:48.457 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:16:48.457 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:48.457 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.457 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:48.457 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.457 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:48.457 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.457 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.457 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.457 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.457 21:43:49 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.457 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.457 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.457 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.457 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.457 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.457 "name": "Existed_Raid", 00:16:48.457 "uuid": "10ce9d10-cc2a-489b-939a-fa57c891131e", 00:16:48.457 "strip_size_kb": 64, 00:16:48.457 "state": "online", 00:16:48.457 "raid_level": "raid5f", 00:16:48.457 "superblock": true, 00:16:48.457 "num_base_bdevs": 3, 00:16:48.457 "num_base_bdevs_discovered": 2, 00:16:48.457 "num_base_bdevs_operational": 2, 00:16:48.457 "base_bdevs_list": [ 00:16:48.457 { 00:16:48.457 "name": null, 00:16:48.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.457 "is_configured": false, 00:16:48.457 "data_offset": 0, 00:16:48.457 "data_size": 63488 00:16:48.457 }, 00:16:48.457 { 00:16:48.457 "name": "BaseBdev2", 00:16:48.457 "uuid": "22a14b63-2c88-4ee6-9fe4-d73653036a9a", 00:16:48.457 "is_configured": true, 00:16:48.457 "data_offset": 2048, 00:16:48.457 "data_size": 63488 00:16:48.457 }, 00:16:48.457 { 00:16:48.457 "name": "BaseBdev3", 00:16:48.457 "uuid": "d131e350-8710-49c6-8862-0d27789fd286", 00:16:48.457 "is_configured": true, 00:16:48.457 "data_offset": 2048, 00:16:48.457 "data_size": 63488 00:16:48.457 } 00:16:48.457 ] 00:16:48.457 }' 00:16:48.457 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.457 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
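The `verify_raid_bdev_state` calls traced above all reduce to one pattern: dump every raid bdev with `rpc_cmd bdev_raid_get_bdevs all`, isolate the one under test with `jq select`, then compare individual fields against the expected values. A minimal sketch of that check follows; the inline JSON is a stand-in for live RPC output (no running SPDK target is assumed), with field names mirroring the `raid_bdev_info` dumps in the log.

```shell
# Stand-in for `rpc_cmd bdev_raid_get_bdevs all` output (assumption: sample
# data only; values copied from the Existed_Raid dump above).
raid_bdevs='[
  {
    "name": "Existed_Raid",
    "state": "online",
    "raid_level": "raid5f",
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 2
  }
]'

# Same filter as bdev_raid.sh@113: isolate one raid bdev by name.
tmp=$(echo "$raid_bdevs" | jq -r '.[] | select(.name == "Existed_Raid")')

# Pull out the fields verify_raid_bdev_state compares.
state=$(echo "$tmp" | jq -r '.state')
discovered=$(echo "$tmp" | jq -r '.num_base_bdevs_discovered')
operational=$(echo "$tmp" | jq -r '.num_base_bdevs_operational')
echo "state=$state discovered=$discovered operational=$operational"
```

This mirrors why the test above expects `online` with 2 of 3 base bdevs after removing BaseBdev1: raid5f tolerates one missing member, so the array stays online in degraded form rather than dropping to `offline`.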
00:16:48.717 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:48.717 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:48.717 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.717 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.717 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:48.717 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.717 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.978 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:48.978 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:48.978 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:48.978 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.978 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.978 [2024-12-10 21:43:49.509697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:48.978 [2024-12-10 21:43:49.509949] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:48.978 [2024-12-10 21:43:49.623469] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:48.978 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.978 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:48.978 21:43:49 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:48.978 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.978 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:48.978 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.978 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.978 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.978 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:48.978 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:48.978 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:48.978 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.978 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.978 [2024-12-10 21:43:49.683375] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:48.978 [2024-12-10 21:43:49.683520] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:49.238 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.238 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:49.238 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:49.238 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.238 
21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:49.238 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.238 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.238 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.238 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:49.238 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:49.238 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:16:49.238 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:49.238 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:49.238 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:49.238 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.238 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.238 BaseBdev2 00:16:49.238 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.238 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:49.238 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:49.238 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:49.238 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:49.238 21:43:49 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:49.238 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:49.238 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:49.238 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.238 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.238 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.238 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:49.238 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.238 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.238 [ 00:16:49.238 { 00:16:49.238 "name": "BaseBdev2", 00:16:49.238 "aliases": [ 00:16:49.238 "c437e964-b15b-448c-a786-783daca4d3c6" 00:16:49.238 ], 00:16:49.238 "product_name": "Malloc disk", 00:16:49.238 "block_size": 512, 00:16:49.238 "num_blocks": 65536, 00:16:49.238 "uuid": "c437e964-b15b-448c-a786-783daca4d3c6", 00:16:49.238 "assigned_rate_limits": { 00:16:49.238 "rw_ios_per_sec": 0, 00:16:49.238 "rw_mbytes_per_sec": 0, 00:16:49.238 "r_mbytes_per_sec": 0, 00:16:49.238 "w_mbytes_per_sec": 0 00:16:49.238 }, 00:16:49.238 "claimed": false, 00:16:49.238 "zoned": false, 00:16:49.238 "supported_io_types": { 00:16:49.238 "read": true, 00:16:49.238 "write": true, 00:16:49.238 "unmap": true, 00:16:49.238 "flush": true, 00:16:49.238 "reset": true, 00:16:49.238 "nvme_admin": false, 00:16:49.238 "nvme_io": false, 00:16:49.238 "nvme_io_md": false, 00:16:49.238 "write_zeroes": true, 00:16:49.238 "zcopy": true, 00:16:49.238 "get_zone_info": false, 
00:16:49.238 "zone_management": false, 00:16:49.238 "zone_append": false, 00:16:49.238 "compare": false, 00:16:49.238 "compare_and_write": false, 00:16:49.238 "abort": true, 00:16:49.238 "seek_hole": false, 00:16:49.238 "seek_data": false, 00:16:49.238 "copy": true, 00:16:49.238 "nvme_iov_md": false 00:16:49.238 }, 00:16:49.238 "memory_domains": [ 00:16:49.238 { 00:16:49.238 "dma_device_id": "system", 00:16:49.238 "dma_device_type": 1 00:16:49.238 }, 00:16:49.238 { 00:16:49.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.238 "dma_device_type": 2 00:16:49.238 } 00:16:49.238 ], 00:16:49.238 "driver_specific": {} 00:16:49.238 } 00:16:49.238 ] 00:16:49.238 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.239 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:49.239 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:49.239 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:49.239 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:49.239 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.239 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.239 BaseBdev3 00:16:49.239 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.239 21:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:49.239 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:49.239 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:49.239 21:43:49 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:49.239 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:49.239 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:49.239 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:49.239 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.239 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.239 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.239 21:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:49.239 21:43:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.239 21:43:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.239 [ 00:16:49.239 { 00:16:49.239 "name": "BaseBdev3", 00:16:49.239 "aliases": [ 00:16:49.239 "ea43a97b-adb0-4fdc-acb2-3e1076d41c65" 00:16:49.239 ], 00:16:49.239 "product_name": "Malloc disk", 00:16:49.239 "block_size": 512, 00:16:49.239 "num_blocks": 65536, 00:16:49.239 "uuid": "ea43a97b-adb0-4fdc-acb2-3e1076d41c65", 00:16:49.239 "assigned_rate_limits": { 00:16:49.239 "rw_ios_per_sec": 0, 00:16:49.239 "rw_mbytes_per_sec": 0, 00:16:49.239 "r_mbytes_per_sec": 0, 00:16:49.239 "w_mbytes_per_sec": 0 00:16:49.239 }, 00:16:49.239 "claimed": false, 00:16:49.239 "zoned": false, 00:16:49.499 "supported_io_types": { 00:16:49.499 "read": true, 00:16:49.499 "write": true, 00:16:49.499 "unmap": true, 00:16:49.499 "flush": true, 00:16:49.499 "reset": true, 00:16:49.499 "nvme_admin": false, 00:16:49.499 "nvme_io": false, 00:16:49.499 "nvme_io_md": 
false, 00:16:49.499 "write_zeroes": true, 00:16:49.499 "zcopy": true, 00:16:49.499 "get_zone_info": false, 00:16:49.499 "zone_management": false, 00:16:49.499 "zone_append": false, 00:16:49.499 "compare": false, 00:16:49.499 "compare_and_write": false, 00:16:49.499 "abort": true, 00:16:49.499 "seek_hole": false, 00:16:49.499 "seek_data": false, 00:16:49.499 "copy": true, 00:16:49.499 "nvme_iov_md": false 00:16:49.499 }, 00:16:49.499 "memory_domains": [ 00:16:49.499 { 00:16:49.499 "dma_device_id": "system", 00:16:49.499 "dma_device_type": 1 00:16:49.499 }, 00:16:49.499 { 00:16:49.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.499 "dma_device_type": 2 00:16:49.499 } 00:16:49.499 ], 00:16:49.499 "driver_specific": {} 00:16:49.499 } 00:16:49.499 ] 00:16:49.499 21:43:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.499 21:43:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:49.499 21:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:49.499 21:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:49.499 21:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:49.499 21:43:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.499 21:43:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.499 [2024-12-10 21:43:50.035642] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:49.499 [2024-12-10 21:43:50.035766] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:49.499 [2024-12-10 21:43:50.035828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:16:49.499 [2024-12-10 21:43:50.038028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:49.499 21:43:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.499 21:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:49.499 21:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:49.499 21:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:49.499 21:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:49.499 21:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.499 21:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:49.499 21:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.499 21:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.499 21:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.499 21:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.499 21:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.499 21:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.499 21:43:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.499 21:43:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.499 21:43:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.499 21:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.499 "name": "Existed_Raid", 00:16:49.499 "uuid": "8d1c53d9-84f6-49d4-aaf2-940b93aa2561", 00:16:49.499 "strip_size_kb": 64, 00:16:49.499 "state": "configuring", 00:16:49.499 "raid_level": "raid5f", 00:16:49.499 "superblock": true, 00:16:49.499 "num_base_bdevs": 3, 00:16:49.499 "num_base_bdevs_discovered": 2, 00:16:49.499 "num_base_bdevs_operational": 3, 00:16:49.499 "base_bdevs_list": [ 00:16:49.499 { 00:16:49.499 "name": "BaseBdev1", 00:16:49.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.499 "is_configured": false, 00:16:49.499 "data_offset": 0, 00:16:49.499 "data_size": 0 00:16:49.499 }, 00:16:49.499 { 00:16:49.499 "name": "BaseBdev2", 00:16:49.499 "uuid": "c437e964-b15b-448c-a786-783daca4d3c6", 00:16:49.499 "is_configured": true, 00:16:49.499 "data_offset": 2048, 00:16:49.499 "data_size": 63488 00:16:49.499 }, 00:16:49.499 { 00:16:49.499 "name": "BaseBdev3", 00:16:49.499 "uuid": "ea43a97b-adb0-4fdc-acb2-3e1076d41c65", 00:16:49.499 "is_configured": true, 00:16:49.499 "data_offset": 2048, 00:16:49.499 "data_size": 63488 00:16:49.499 } 00:16:49.499 ] 00:16:49.499 }' 00:16:49.499 21:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.499 21:43:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.760 21:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:49.760 21:43:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.760 21:43:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.760 [2024-12-10 21:43:50.526798] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:49.760 
21:43:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.760 21:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:49.760 21:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:49.760 21:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:49.760 21:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:49.760 21:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.760 21:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:49.760 21:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.760 21:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.760 21:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.760 21:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.760 21:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.760 21:43:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.760 21:43:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.760 21:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.020 21:43:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.020 21:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:16:50.020 "name": "Existed_Raid", 00:16:50.020 "uuid": "8d1c53d9-84f6-49d4-aaf2-940b93aa2561", 00:16:50.020 "strip_size_kb": 64, 00:16:50.020 "state": "configuring", 00:16:50.020 "raid_level": "raid5f", 00:16:50.020 "superblock": true, 00:16:50.020 "num_base_bdevs": 3, 00:16:50.021 "num_base_bdevs_discovered": 1, 00:16:50.021 "num_base_bdevs_operational": 3, 00:16:50.021 "base_bdevs_list": [ 00:16:50.021 { 00:16:50.021 "name": "BaseBdev1", 00:16:50.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.021 "is_configured": false, 00:16:50.021 "data_offset": 0, 00:16:50.021 "data_size": 0 00:16:50.021 }, 00:16:50.021 { 00:16:50.021 "name": null, 00:16:50.021 "uuid": "c437e964-b15b-448c-a786-783daca4d3c6", 00:16:50.021 "is_configured": false, 00:16:50.021 "data_offset": 0, 00:16:50.021 "data_size": 63488 00:16:50.021 }, 00:16:50.021 { 00:16:50.021 "name": "BaseBdev3", 00:16:50.021 "uuid": "ea43a97b-adb0-4fdc-acb2-3e1076d41c65", 00:16:50.021 "is_configured": true, 00:16:50.021 "data_offset": 2048, 00:16:50.021 "data_size": 63488 00:16:50.021 } 00:16:50.021 ] 00:16:50.021 }' 00:16:50.021 21:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.021 21:43:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.280 21:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.280 21:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:50.280 21:43:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.280 21:43:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.280 21:43:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.280 21:43:51 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:50.280 21:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:50.280 21:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.280 21:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.280 [2024-12-10 21:43:51.059286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:50.280 BaseBdev1 00:16:50.540 21:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.540 21:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:50.540 21:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:50.540 21:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:50.540 21:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:50.540 21:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:50.540 21:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:50.540 21:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:50.540 21:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.540 21:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.540 21:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.540 21:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:50.540 
21:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.540 21:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.540 [ 00:16:50.540 { 00:16:50.540 "name": "BaseBdev1", 00:16:50.540 "aliases": [ 00:16:50.540 "7bda1673-e7e7-4d3a-989f-ab438d58df23" 00:16:50.540 ], 00:16:50.540 "product_name": "Malloc disk", 00:16:50.540 "block_size": 512, 00:16:50.540 "num_blocks": 65536, 00:16:50.540 "uuid": "7bda1673-e7e7-4d3a-989f-ab438d58df23", 00:16:50.540 "assigned_rate_limits": { 00:16:50.540 "rw_ios_per_sec": 0, 00:16:50.540 "rw_mbytes_per_sec": 0, 00:16:50.540 "r_mbytes_per_sec": 0, 00:16:50.540 "w_mbytes_per_sec": 0 00:16:50.540 }, 00:16:50.540 "claimed": true, 00:16:50.540 "claim_type": "exclusive_write", 00:16:50.540 "zoned": false, 00:16:50.540 "supported_io_types": { 00:16:50.540 "read": true, 00:16:50.540 "write": true, 00:16:50.540 "unmap": true, 00:16:50.540 "flush": true, 00:16:50.540 "reset": true, 00:16:50.540 "nvme_admin": false, 00:16:50.540 "nvme_io": false, 00:16:50.540 "nvme_io_md": false, 00:16:50.540 "write_zeroes": true, 00:16:50.540 "zcopy": true, 00:16:50.540 "get_zone_info": false, 00:16:50.540 "zone_management": false, 00:16:50.540 "zone_append": false, 00:16:50.540 "compare": false, 00:16:50.540 "compare_and_write": false, 00:16:50.540 "abort": true, 00:16:50.540 "seek_hole": false, 00:16:50.540 "seek_data": false, 00:16:50.540 "copy": true, 00:16:50.540 "nvme_iov_md": false 00:16:50.540 }, 00:16:50.540 "memory_domains": [ 00:16:50.540 { 00:16:50.540 "dma_device_id": "system", 00:16:50.540 "dma_device_type": 1 00:16:50.540 }, 00:16:50.540 { 00:16:50.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.540 "dma_device_type": 2 00:16:50.540 } 00:16:50.540 ], 00:16:50.540 "driver_specific": {} 00:16:50.540 } 00:16:50.540 ] 00:16:50.540 21:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.540 
21:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:50.540 21:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:50.540 21:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:50.540 21:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:50.540 21:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:50.540 21:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.540 21:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:50.540 21:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.540 21:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.540 21:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.540 21:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.540 21:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.541 21:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.541 21:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.541 21:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.541 21:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.541 21:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:16:50.541 "name": "Existed_Raid", 00:16:50.541 "uuid": "8d1c53d9-84f6-49d4-aaf2-940b93aa2561", 00:16:50.541 "strip_size_kb": 64, 00:16:50.541 "state": "configuring", 00:16:50.541 "raid_level": "raid5f", 00:16:50.541 "superblock": true, 00:16:50.541 "num_base_bdevs": 3, 00:16:50.541 "num_base_bdevs_discovered": 2, 00:16:50.541 "num_base_bdevs_operational": 3, 00:16:50.541 "base_bdevs_list": [ 00:16:50.541 { 00:16:50.541 "name": "BaseBdev1", 00:16:50.541 "uuid": "7bda1673-e7e7-4d3a-989f-ab438d58df23", 00:16:50.541 "is_configured": true, 00:16:50.541 "data_offset": 2048, 00:16:50.541 "data_size": 63488 00:16:50.541 }, 00:16:50.541 { 00:16:50.541 "name": null, 00:16:50.541 "uuid": "c437e964-b15b-448c-a786-783daca4d3c6", 00:16:50.541 "is_configured": false, 00:16:50.541 "data_offset": 0, 00:16:50.541 "data_size": 63488 00:16:50.541 }, 00:16:50.541 { 00:16:50.541 "name": "BaseBdev3", 00:16:50.541 "uuid": "ea43a97b-adb0-4fdc-acb2-3e1076d41c65", 00:16:50.541 "is_configured": true, 00:16:50.541 "data_offset": 2048, 00:16:50.541 "data_size": 63488 00:16:50.541 } 00:16:50.541 ] 00:16:50.541 }' 00:16:50.541 21:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.541 21:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.800 21:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.800 21:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.800 21:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.800 21:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:50.800 21:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.060 21:43:51 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:51.060 21:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:51.060 21:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.060 21:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.060 [2024-12-10 21:43:51.586544] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:51.060 21:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.060 21:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:51.060 21:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:51.060 21:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:51.060 21:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.060 21:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.060 21:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:51.060 21:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.060 21:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.060 21:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.060 21:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.060 21:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.060 21:43:51 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.060 21:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.060 21:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.060 21:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.060 21:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.060 "name": "Existed_Raid", 00:16:51.060 "uuid": "8d1c53d9-84f6-49d4-aaf2-940b93aa2561", 00:16:51.060 "strip_size_kb": 64, 00:16:51.060 "state": "configuring", 00:16:51.060 "raid_level": "raid5f", 00:16:51.060 "superblock": true, 00:16:51.060 "num_base_bdevs": 3, 00:16:51.060 "num_base_bdevs_discovered": 1, 00:16:51.060 "num_base_bdevs_operational": 3, 00:16:51.060 "base_bdevs_list": [ 00:16:51.060 { 00:16:51.060 "name": "BaseBdev1", 00:16:51.060 "uuid": "7bda1673-e7e7-4d3a-989f-ab438d58df23", 00:16:51.060 "is_configured": true, 00:16:51.060 "data_offset": 2048, 00:16:51.060 "data_size": 63488 00:16:51.060 }, 00:16:51.060 { 00:16:51.060 "name": null, 00:16:51.060 "uuid": "c437e964-b15b-448c-a786-783daca4d3c6", 00:16:51.060 "is_configured": false, 00:16:51.060 "data_offset": 0, 00:16:51.060 "data_size": 63488 00:16:51.060 }, 00:16:51.060 { 00:16:51.060 "name": null, 00:16:51.060 "uuid": "ea43a97b-adb0-4fdc-acb2-3e1076d41c65", 00:16:51.060 "is_configured": false, 00:16:51.060 "data_offset": 0, 00:16:51.060 "data_size": 63488 00:16:51.060 } 00:16:51.060 ] 00:16:51.060 }' 00:16:51.060 21:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.060 21:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.320 21:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 
00:16:51.320 21:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.320 21:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.320 21:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.320 21:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.320 21:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:51.320 21:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:51.320 21:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.320 21:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.320 [2024-12-10 21:43:52.089736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:51.320 21:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.320 21:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:51.320 21:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:51.320 21:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:51.320 21:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.320 21:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.320 21:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:51.320 21:43:52 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.320 21:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.320 21:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.320 21:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.320 21:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.581 21:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.581 21:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.581 21:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.581 21:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.581 21:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.581 "name": "Existed_Raid", 00:16:51.581 "uuid": "8d1c53d9-84f6-49d4-aaf2-940b93aa2561", 00:16:51.581 "strip_size_kb": 64, 00:16:51.581 "state": "configuring", 00:16:51.581 "raid_level": "raid5f", 00:16:51.581 "superblock": true, 00:16:51.581 "num_base_bdevs": 3, 00:16:51.581 "num_base_bdevs_discovered": 2, 00:16:51.581 "num_base_bdevs_operational": 3, 00:16:51.581 "base_bdevs_list": [ 00:16:51.581 { 00:16:51.581 "name": "BaseBdev1", 00:16:51.581 "uuid": "7bda1673-e7e7-4d3a-989f-ab438d58df23", 00:16:51.581 "is_configured": true, 00:16:51.581 "data_offset": 2048, 00:16:51.581 "data_size": 63488 00:16:51.581 }, 00:16:51.581 { 00:16:51.581 "name": null, 00:16:51.581 "uuid": "c437e964-b15b-448c-a786-783daca4d3c6", 00:16:51.581 "is_configured": false, 00:16:51.581 "data_offset": 0, 00:16:51.581 "data_size": 63488 00:16:51.581 }, 00:16:51.581 { 
00:16:51.581 "name": "BaseBdev3", 00:16:51.581 "uuid": "ea43a97b-adb0-4fdc-acb2-3e1076d41c65", 00:16:51.581 "is_configured": true, 00:16:51.581 "data_offset": 2048, 00:16:51.581 "data_size": 63488 00:16:51.581 } 00:16:51.581 ] 00:16:51.581 }' 00:16:51.581 21:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.581 21:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.840 21:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.840 21:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:51.840 21:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.840 21:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.840 21:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.840 21:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:51.840 21:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:51.840 21:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.840 21:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.840 [2024-12-10 21:43:52.584906] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:52.100 21:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.100 21:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:52.100 21:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:16:52.100 21:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.100 21:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:52.100 21:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.100 21:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:52.100 21:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.100 21:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.100 21:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.100 21:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.100 21:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.100 21:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.100 21:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.100 21:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.100 21:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.100 21:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.100 "name": "Existed_Raid", 00:16:52.100 "uuid": "8d1c53d9-84f6-49d4-aaf2-940b93aa2561", 00:16:52.100 "strip_size_kb": 64, 00:16:52.100 "state": "configuring", 00:16:52.100 "raid_level": "raid5f", 00:16:52.100 "superblock": true, 00:16:52.100 "num_base_bdevs": 3, 00:16:52.100 "num_base_bdevs_discovered": 1, 00:16:52.100 
"num_base_bdevs_operational": 3, 00:16:52.100 "base_bdevs_list": [ 00:16:52.100 { 00:16:52.100 "name": null, 00:16:52.100 "uuid": "7bda1673-e7e7-4d3a-989f-ab438d58df23", 00:16:52.100 "is_configured": false, 00:16:52.100 "data_offset": 0, 00:16:52.100 "data_size": 63488 00:16:52.100 }, 00:16:52.100 { 00:16:52.100 "name": null, 00:16:52.100 "uuid": "c437e964-b15b-448c-a786-783daca4d3c6", 00:16:52.100 "is_configured": false, 00:16:52.100 "data_offset": 0, 00:16:52.100 "data_size": 63488 00:16:52.100 }, 00:16:52.100 { 00:16:52.100 "name": "BaseBdev3", 00:16:52.100 "uuid": "ea43a97b-adb0-4fdc-acb2-3e1076d41c65", 00:16:52.100 "is_configured": true, 00:16:52.100 "data_offset": 2048, 00:16:52.100 "data_size": 63488 00:16:52.101 } 00:16:52.101 ] 00:16:52.101 }' 00:16:52.101 21:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.101 21:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.360 21:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.360 21:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:52.360 21:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.360 21:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.620 21:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.620 21:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:52.620 21:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:52.620 21:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.620 21:43:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.620 [2024-12-10 21:43:53.190217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:52.620 21:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.620 21:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:52.620 21:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:52.620 21:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.620 21:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:52.620 21:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.620 21:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:52.620 21:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.620 21:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.620 21:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.620 21:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.620 21:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.620 21:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.620 21:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.620 21:43:53 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:52.620 21:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.620 21:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.620 "name": "Existed_Raid", 00:16:52.620 "uuid": "8d1c53d9-84f6-49d4-aaf2-940b93aa2561", 00:16:52.620 "strip_size_kb": 64, 00:16:52.620 "state": "configuring", 00:16:52.620 "raid_level": "raid5f", 00:16:52.620 "superblock": true, 00:16:52.620 "num_base_bdevs": 3, 00:16:52.620 "num_base_bdevs_discovered": 2, 00:16:52.620 "num_base_bdevs_operational": 3, 00:16:52.620 "base_bdevs_list": [ 00:16:52.620 { 00:16:52.620 "name": null, 00:16:52.620 "uuid": "7bda1673-e7e7-4d3a-989f-ab438d58df23", 00:16:52.620 "is_configured": false, 00:16:52.620 "data_offset": 0, 00:16:52.620 "data_size": 63488 00:16:52.620 }, 00:16:52.620 { 00:16:52.620 "name": "BaseBdev2", 00:16:52.620 "uuid": "c437e964-b15b-448c-a786-783daca4d3c6", 00:16:52.621 "is_configured": true, 00:16:52.621 "data_offset": 2048, 00:16:52.621 "data_size": 63488 00:16:52.621 }, 00:16:52.621 { 00:16:52.621 "name": "BaseBdev3", 00:16:52.621 "uuid": "ea43a97b-adb0-4fdc-acb2-3e1076d41c65", 00:16:52.621 "is_configured": true, 00:16:52.621 "data_offset": 2048, 00:16:52.621 "data_size": 63488 00:16:52.621 } 00:16:52.621 ] 00:16:52.621 }' 00:16:52.621 21:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.621 21:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.191 21:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.191 21:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:53.191 21:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.191 21:43:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.191 21:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.191 21:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:53.191 21:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:53.191 21:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.191 21:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.191 21:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.191 21:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.191 21:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7bda1673-e7e7-4d3a-989f-ab438d58df23 00:16:53.191 21:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.191 21:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.191 [2024-12-10 21:43:53.809756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:53.191 [2024-12-10 21:43:53.810026] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:53.191 [2024-12-10 21:43:53.810046] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:53.191 [2024-12-10 21:43:53.810339] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:53.191 NewBaseBdev 00:16:53.191 21:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.191 21:43:53 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:53.191 21:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:53.191 21:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:53.191 21:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:53.191 21:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:53.191 21:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:53.191 21:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:53.191 21:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.191 21:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.191 [2024-12-10 21:43:53.816893] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:53.191 [2024-12-10 21:43:53.816977] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:53.191 [2024-12-10 21:43:53.817228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:53.191 21:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.191 21:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:53.191 21:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.191 21:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.191 [ 00:16:53.191 { 00:16:53.191 "name": "NewBaseBdev", 00:16:53.191 
"aliases": [ 00:16:53.191 "7bda1673-e7e7-4d3a-989f-ab438d58df23" 00:16:53.191 ], 00:16:53.191 "product_name": "Malloc disk", 00:16:53.191 "block_size": 512, 00:16:53.191 "num_blocks": 65536, 00:16:53.191 "uuid": "7bda1673-e7e7-4d3a-989f-ab438d58df23", 00:16:53.191 "assigned_rate_limits": { 00:16:53.191 "rw_ios_per_sec": 0, 00:16:53.191 "rw_mbytes_per_sec": 0, 00:16:53.191 "r_mbytes_per_sec": 0, 00:16:53.191 "w_mbytes_per_sec": 0 00:16:53.191 }, 00:16:53.191 "claimed": true, 00:16:53.191 "claim_type": "exclusive_write", 00:16:53.191 "zoned": false, 00:16:53.191 "supported_io_types": { 00:16:53.191 "read": true, 00:16:53.191 "write": true, 00:16:53.191 "unmap": true, 00:16:53.191 "flush": true, 00:16:53.191 "reset": true, 00:16:53.191 "nvme_admin": false, 00:16:53.191 "nvme_io": false, 00:16:53.191 "nvme_io_md": false, 00:16:53.191 "write_zeroes": true, 00:16:53.191 "zcopy": true, 00:16:53.191 "get_zone_info": false, 00:16:53.191 "zone_management": false, 00:16:53.191 "zone_append": false, 00:16:53.191 "compare": false, 00:16:53.191 "compare_and_write": false, 00:16:53.191 "abort": true, 00:16:53.191 "seek_hole": false, 00:16:53.191 "seek_data": false, 00:16:53.191 "copy": true, 00:16:53.191 "nvme_iov_md": false 00:16:53.191 }, 00:16:53.191 "memory_domains": [ 00:16:53.191 { 00:16:53.191 "dma_device_id": "system", 00:16:53.192 "dma_device_type": 1 00:16:53.192 }, 00:16:53.192 { 00:16:53.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.192 "dma_device_type": 2 00:16:53.192 } 00:16:53.192 ], 00:16:53.192 "driver_specific": {} 00:16:53.192 } 00:16:53.192 ] 00:16:53.192 21:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.192 21:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:53.192 21:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:53.192 21:43:53 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:53.192 21:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.192 21:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.192 21:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.192 21:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:53.192 21:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.192 21:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.192 21:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.192 21:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.192 21:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.192 21:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.192 21:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.192 21:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.192 21:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.192 21:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.192 "name": "Existed_Raid", 00:16:53.192 "uuid": "8d1c53d9-84f6-49d4-aaf2-940b93aa2561", 00:16:53.192 "strip_size_kb": 64, 00:16:53.192 "state": "online", 00:16:53.192 "raid_level": "raid5f", 00:16:53.192 "superblock": true, 00:16:53.192 
"num_base_bdevs": 3, 00:16:53.192 "num_base_bdevs_discovered": 3, 00:16:53.192 "num_base_bdevs_operational": 3, 00:16:53.192 "base_bdevs_list": [ 00:16:53.192 { 00:16:53.192 "name": "NewBaseBdev", 00:16:53.192 "uuid": "7bda1673-e7e7-4d3a-989f-ab438d58df23", 00:16:53.192 "is_configured": true, 00:16:53.192 "data_offset": 2048, 00:16:53.192 "data_size": 63488 00:16:53.192 }, 00:16:53.192 { 00:16:53.192 "name": "BaseBdev2", 00:16:53.192 "uuid": "c437e964-b15b-448c-a786-783daca4d3c6", 00:16:53.192 "is_configured": true, 00:16:53.192 "data_offset": 2048, 00:16:53.192 "data_size": 63488 00:16:53.192 }, 00:16:53.192 { 00:16:53.192 "name": "BaseBdev3", 00:16:53.192 "uuid": "ea43a97b-adb0-4fdc-acb2-3e1076d41c65", 00:16:53.192 "is_configured": true, 00:16:53.192 "data_offset": 2048, 00:16:53.192 "data_size": 63488 00:16:53.192 } 00:16:53.192 ] 00:16:53.192 }' 00:16:53.192 21:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.192 21:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.762 21:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:53.762 21:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:53.762 21:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:53.762 21:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:53.762 21:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:53.762 21:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:53.762 21:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:53.762 21:43:54 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:53.762 21:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.762 21:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.762 [2024-12-10 21:43:54.320384] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:53.762 21:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.762 21:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:53.762 "name": "Existed_Raid", 00:16:53.762 "aliases": [ 00:16:53.762 "8d1c53d9-84f6-49d4-aaf2-940b93aa2561" 00:16:53.762 ], 00:16:53.762 "product_name": "Raid Volume", 00:16:53.762 "block_size": 512, 00:16:53.762 "num_blocks": 126976, 00:16:53.762 "uuid": "8d1c53d9-84f6-49d4-aaf2-940b93aa2561", 00:16:53.762 "assigned_rate_limits": { 00:16:53.762 "rw_ios_per_sec": 0, 00:16:53.762 "rw_mbytes_per_sec": 0, 00:16:53.762 "r_mbytes_per_sec": 0, 00:16:53.762 "w_mbytes_per_sec": 0 00:16:53.762 }, 00:16:53.762 "claimed": false, 00:16:53.762 "zoned": false, 00:16:53.762 "supported_io_types": { 00:16:53.762 "read": true, 00:16:53.762 "write": true, 00:16:53.762 "unmap": false, 00:16:53.762 "flush": false, 00:16:53.762 "reset": true, 00:16:53.762 "nvme_admin": false, 00:16:53.762 "nvme_io": false, 00:16:53.762 "nvme_io_md": false, 00:16:53.762 "write_zeroes": true, 00:16:53.762 "zcopy": false, 00:16:53.762 "get_zone_info": false, 00:16:53.762 "zone_management": false, 00:16:53.762 "zone_append": false, 00:16:53.762 "compare": false, 00:16:53.762 "compare_and_write": false, 00:16:53.762 "abort": false, 00:16:53.762 "seek_hole": false, 00:16:53.762 "seek_data": false, 00:16:53.762 "copy": false, 00:16:53.762 "nvme_iov_md": false 00:16:53.762 }, 00:16:53.762 "driver_specific": { 00:16:53.762 "raid": { 00:16:53.762 "uuid": "8d1c53d9-84f6-49d4-aaf2-940b93aa2561", 00:16:53.762 
"strip_size_kb": 64, 00:16:53.762 "state": "online", 00:16:53.762 "raid_level": "raid5f", 00:16:53.762 "superblock": true, 00:16:53.762 "num_base_bdevs": 3, 00:16:53.762 "num_base_bdevs_discovered": 3, 00:16:53.762 "num_base_bdevs_operational": 3, 00:16:53.762 "base_bdevs_list": [ 00:16:53.762 { 00:16:53.762 "name": "NewBaseBdev", 00:16:53.762 "uuid": "7bda1673-e7e7-4d3a-989f-ab438d58df23", 00:16:53.762 "is_configured": true, 00:16:53.762 "data_offset": 2048, 00:16:53.762 "data_size": 63488 00:16:53.762 }, 00:16:53.762 { 00:16:53.762 "name": "BaseBdev2", 00:16:53.762 "uuid": "c437e964-b15b-448c-a786-783daca4d3c6", 00:16:53.762 "is_configured": true, 00:16:53.762 "data_offset": 2048, 00:16:53.762 "data_size": 63488 00:16:53.762 }, 00:16:53.762 { 00:16:53.762 "name": "BaseBdev3", 00:16:53.762 "uuid": "ea43a97b-adb0-4fdc-acb2-3e1076d41c65", 00:16:53.762 "is_configured": true, 00:16:53.762 "data_offset": 2048, 00:16:53.762 "data_size": 63488 00:16:53.762 } 00:16:53.762 ] 00:16:53.762 } 00:16:53.762 } 00:16:53.762 }' 00:16:53.762 21:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:53.762 21:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:53.762 BaseBdev2 00:16:53.762 BaseBdev3' 00:16:53.762 21:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:53.762 21:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:53.762 21:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:53.762 21:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:53.762 21:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:16:53.762 21:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.762 21:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:53.762 21:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.762 21:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:53.762 21:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:53.762 21:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:53.762 21:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:53.762 21:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.762 21:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.762 21:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:53.762 21:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.762 21:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:53.762 21:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:53.762 21:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:53.762 21:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:53.762 21:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.762 
21:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.762 21:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.022 21:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.022 21:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:54.022 21:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:54.022 21:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:54.022 21:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.022 21:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.022 [2024-12-10 21:43:54.583815] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:54.022 [2024-12-10 21:43:54.583851] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:54.022 [2024-12-10 21:43:54.583944] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:54.022 [2024-12-10 21:43:54.584268] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:54.022 [2024-12-10 21:43:54.584285] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:54.022 21:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.022 21:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80684 00:16:54.022 21:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80684 ']' 00:16:54.022 21:43:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80684 00:16:54.022 21:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:54.022 21:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:54.022 21:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80684 00:16:54.022 21:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:54.022 21:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:54.022 21:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80684' 00:16:54.022 killing process with pid 80684 00:16:54.022 21:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80684 00:16:54.022 [2024-12-10 21:43:54.623378] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:54.022 21:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80684 00:16:54.281 [2024-12-10 21:43:54.951838] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:55.660 ************************************ 00:16:55.660 END TEST raid5f_state_function_test_sb 00:16:55.660 ************************************ 00:16:55.660 21:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:55.660 00:16:55.660 real 0m11.036s 00:16:55.660 user 0m17.435s 00:16:55.660 sys 0m1.963s 00:16:55.660 21:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:55.660 21:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.660 21:43:56 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test 
raid5f 3 00:16:55.660 21:43:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:55.660 21:43:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:55.660 21:43:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:55.660 ************************************ 00:16:55.660 START TEST raid5f_superblock_test 00:16:55.660 ************************************ 00:16:55.660 21:43:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:16:55.660 21:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:55.660 21:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:16:55.660 21:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:55.660 21:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:55.660 21:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:55.660 21:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:55.660 21:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:55.660 21:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:55.660 21:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:55.660 21:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:55.660 21:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:55.660 21:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:55.660 21:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:55.660 21:43:56 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:16:55.660 21:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:55.660 21:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:55.660 21:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81309 00:16:55.660 21:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:55.660 21:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81309 00:16:55.660 21:43:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81309 ']' 00:16:55.660 21:43:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.660 21:43:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:55.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.660 21:43:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.660 21:43:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:55.660 21:43:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.660 [2024-12-10 21:43:56.366084] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:16:55.660 [2024-12-10 21:43:56.366299] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81309 ] 00:16:55.919 [2024-12-10 21:43:56.540396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.919 [2024-12-10 21:43:56.671674] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.179 [2024-12-10 21:43:56.904173] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:56.179 [2024-12-10 21:43:56.904243] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:56.748 21:43:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:56.748 21:43:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:16:56.748 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:56.748 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:56.748 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:56.748 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:56.748 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:56.748 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:56.748 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:56.748 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:56.748 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:56.748 21:43:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.748 21:43:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.748 malloc1 00:16:56.748 21:43:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.748 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:56.748 21:43:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.748 21:43:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.748 [2024-12-10 21:43:57.336480] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:56.748 [2024-12-10 21:43:57.336540] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.748 [2024-12-10 21:43:57.336562] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:56.748 [2024-12-10 21:43:57.336572] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.748 [2024-12-10 21:43:57.338875] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.748 [2024-12-10 21:43:57.338913] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:56.748 pt1 00:16:56.748 21:43:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.748 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:56.748 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.749 malloc2 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.749 [2024-12-10 21:43:57.397388] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:56.749 [2024-12-10 21:43:57.397474] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.749 [2024-12-10 21:43:57.397498] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:56.749 [2024-12-10 21:43:57.397508] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.749 [2024-12-10 21:43:57.399845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.749 [2024-12-10 21:43:57.399891] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:56.749 pt2 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.749 malloc3 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.749 [2024-12-10 21:43:57.467747] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:56.749 [2024-12-10 21:43:57.467875] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.749 [2024-12-10 21:43:57.467943] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:56.749 [2024-12-10 21:43:57.468117] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.749 [2024-12-10 21:43:57.470604] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.749 [2024-12-10 21:43:57.470680] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:56.749 pt3 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.749 [2024-12-10 21:43:57.479788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:56.749 [2024-12-10 21:43:57.481887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:56.749 [2024-12-10 21:43:57.482010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:56.749 [2024-12-10 21:43:57.482242] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:56.749 [2024-12-10 21:43:57.482306] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:16:56.749 [2024-12-10 21:43:57.482620] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:56.749 [2024-12-10 21:43:57.489115] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:56.749 [2024-12-10 21:43:57.489180] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:56.749 [2024-12-10 21:43:57.489471] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.749 21:43:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.009 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.009 "name": "raid_bdev1", 00:16:57.009 "uuid": "d693530d-5fdf-415f-8ae8-b74e86c00791", 00:16:57.009 "strip_size_kb": 64, 00:16:57.009 "state": "online", 00:16:57.009 "raid_level": "raid5f", 00:16:57.009 "superblock": true, 00:16:57.009 "num_base_bdevs": 3, 00:16:57.009 "num_base_bdevs_discovered": 3, 00:16:57.009 "num_base_bdevs_operational": 3, 00:16:57.009 "base_bdevs_list": [ 00:16:57.009 { 00:16:57.009 "name": "pt1", 00:16:57.009 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:57.009 "is_configured": true, 00:16:57.009 "data_offset": 2048, 00:16:57.009 "data_size": 63488 00:16:57.009 }, 00:16:57.009 { 00:16:57.009 "name": "pt2", 00:16:57.009 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:57.009 "is_configured": true, 00:16:57.009 "data_offset": 2048, 00:16:57.009 "data_size": 63488 00:16:57.009 }, 00:16:57.009 { 00:16:57.009 "name": "pt3", 00:16:57.009 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:57.009 "is_configured": true, 00:16:57.009 "data_offset": 2048, 00:16:57.009 "data_size": 63488 00:16:57.009 } 00:16:57.009 ] 00:16:57.009 }' 00:16:57.009 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.009 21:43:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.269 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:57.269 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:57.269 21:43:57 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:57.269 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:57.269 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:57.269 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:57.269 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:57.269 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:57.269 21:43:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.269 21:43:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.269 [2024-12-10 21:43:57.960565] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:57.269 21:43:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.269 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:57.269 "name": "raid_bdev1", 00:16:57.269 "aliases": [ 00:16:57.269 "d693530d-5fdf-415f-8ae8-b74e86c00791" 00:16:57.269 ], 00:16:57.269 "product_name": "Raid Volume", 00:16:57.269 "block_size": 512, 00:16:57.269 "num_blocks": 126976, 00:16:57.269 "uuid": "d693530d-5fdf-415f-8ae8-b74e86c00791", 00:16:57.269 "assigned_rate_limits": { 00:16:57.269 "rw_ios_per_sec": 0, 00:16:57.269 "rw_mbytes_per_sec": 0, 00:16:57.269 "r_mbytes_per_sec": 0, 00:16:57.269 "w_mbytes_per_sec": 0 00:16:57.269 }, 00:16:57.269 "claimed": false, 00:16:57.269 "zoned": false, 00:16:57.269 "supported_io_types": { 00:16:57.269 "read": true, 00:16:57.269 "write": true, 00:16:57.269 "unmap": false, 00:16:57.269 "flush": false, 00:16:57.269 "reset": true, 00:16:57.269 "nvme_admin": false, 00:16:57.269 "nvme_io": false, 00:16:57.269 "nvme_io_md": false, 
00:16:57.269 "write_zeroes": true, 00:16:57.269 "zcopy": false, 00:16:57.269 "get_zone_info": false, 00:16:57.269 "zone_management": false, 00:16:57.269 "zone_append": false, 00:16:57.269 "compare": false, 00:16:57.269 "compare_and_write": false, 00:16:57.269 "abort": false, 00:16:57.269 "seek_hole": false, 00:16:57.269 "seek_data": false, 00:16:57.269 "copy": false, 00:16:57.269 "nvme_iov_md": false 00:16:57.269 }, 00:16:57.269 "driver_specific": { 00:16:57.269 "raid": { 00:16:57.269 "uuid": "d693530d-5fdf-415f-8ae8-b74e86c00791", 00:16:57.269 "strip_size_kb": 64, 00:16:57.269 "state": "online", 00:16:57.269 "raid_level": "raid5f", 00:16:57.269 "superblock": true, 00:16:57.269 "num_base_bdevs": 3, 00:16:57.269 "num_base_bdevs_discovered": 3, 00:16:57.269 "num_base_bdevs_operational": 3, 00:16:57.269 "base_bdevs_list": [ 00:16:57.269 { 00:16:57.269 "name": "pt1", 00:16:57.269 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:57.269 "is_configured": true, 00:16:57.269 "data_offset": 2048, 00:16:57.269 "data_size": 63488 00:16:57.269 }, 00:16:57.269 { 00:16:57.269 "name": "pt2", 00:16:57.269 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:57.269 "is_configured": true, 00:16:57.269 "data_offset": 2048, 00:16:57.269 "data_size": 63488 00:16:57.269 }, 00:16:57.269 { 00:16:57.269 "name": "pt3", 00:16:57.269 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:57.269 "is_configured": true, 00:16:57.269 "data_offset": 2048, 00:16:57.269 "data_size": 63488 00:16:57.269 } 00:16:57.269 ] 00:16:57.269 } 00:16:57.269 } 00:16:57.269 }' 00:16:57.269 21:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:57.269 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:57.269 pt2 00:16:57.269 pt3' 00:16:57.269 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:16:57.529 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:57.529 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:57.530 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:57.530 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:57.530 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.530 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.530 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.530 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:57.530 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:57.530 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:57.530 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:57.530 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:57.530 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.530 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.530 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.530 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:57.530 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:57.530 
21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:57.530 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:57.530 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.530 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.530 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:57.530 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.530 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:57.530 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:57.530 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:57.530 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:57.530 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.530 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.530 [2024-12-10 21:43:58.212085] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:57.530 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.530 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d693530d-5fdf-415f-8ae8-b74e86c00791 00:16:57.530 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d693530d-5fdf-415f-8ae8-b74e86c00791 ']' 00:16:57.530 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:57.530 21:43:58 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.530 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.530 [2024-12-10 21:43:58.259803] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:57.530 [2024-12-10 21:43:58.259897] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:57.530 [2024-12-10 21:43:58.260033] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:57.530 [2024-12-10 21:43:58.260155] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:57.530 [2024-12-10 21:43:58.260220] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:57.530 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.530 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.530 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.530 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.530 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:57.530 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.789 [2024-12-10 21:43:58.415624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:57.789 [2024-12-10 21:43:58.417852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:57.789 [2024-12-10 21:43:58.417967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:57.789 [2024-12-10 21:43:58.418046] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:57.789 [2024-12-10 21:43:58.418142] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:57.789 [2024-12-10 21:43:58.418223] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:57.789 [2024-12-10 21:43:58.418310] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:57.789 [2024-12-10 21:43:58.418346] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:57.789 request: 00:16:57.789 { 00:16:57.789 "name": "raid_bdev1", 00:16:57.789 "raid_level": "raid5f", 00:16:57.789 "base_bdevs": [ 00:16:57.789 "malloc1", 00:16:57.789 "malloc2", 00:16:57.789 "malloc3" 00:16:57.789 ], 00:16:57.789 "strip_size_kb": 64, 00:16:57.789 "superblock": false, 00:16:57.789 "method": "bdev_raid_create", 00:16:57.789 "req_id": 1 00:16:57.789 } 00:16:57.789 Got JSON-RPC error response 00:16:57.789 response: 00:16:57.789 { 00:16:57.789 "code": -17, 00:16:57.789 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:57.789 } 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.789 
21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:57.789 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.790 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.790 [2024-12-10 21:43:58.491401] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:57.790 [2024-12-10 21:43:58.491490] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.790 [2024-12-10 21:43:58.491515] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:57.790 [2024-12-10 21:43:58.491526] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.790 [2024-12-10 21:43:58.494077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.790 [2024-12-10 21:43:58.494127] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:57.790 [2024-12-10 21:43:58.494228] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:57.790 [2024-12-10 21:43:58.494295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:57.790 pt1 00:16:57.790 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.790 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:16:57.790 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.790 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:57.790 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:57.790 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:57.790 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:57.790 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.790 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.790 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.790 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.790 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.790 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.790 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.790 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.790 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.790 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.790 "name": "raid_bdev1", 00:16:57.790 "uuid": "d693530d-5fdf-415f-8ae8-b74e86c00791", 00:16:57.790 "strip_size_kb": 64, 00:16:57.790 "state": "configuring", 00:16:57.790 "raid_level": "raid5f", 00:16:57.790 "superblock": true, 00:16:57.790 "num_base_bdevs": 3, 00:16:57.790 "num_base_bdevs_discovered": 1, 00:16:57.790 
"num_base_bdevs_operational": 3, 00:16:57.790 "base_bdevs_list": [ 00:16:57.790 { 00:16:57.790 "name": "pt1", 00:16:57.790 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:57.790 "is_configured": true, 00:16:57.790 "data_offset": 2048, 00:16:57.790 "data_size": 63488 00:16:57.790 }, 00:16:57.790 { 00:16:57.790 "name": null, 00:16:57.790 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:57.790 "is_configured": false, 00:16:57.790 "data_offset": 2048, 00:16:57.790 "data_size": 63488 00:16:57.790 }, 00:16:57.790 { 00:16:57.790 "name": null, 00:16:57.790 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:57.790 "is_configured": false, 00:16:57.790 "data_offset": 2048, 00:16:57.790 "data_size": 63488 00:16:57.790 } 00:16:57.790 ] 00:16:57.790 }' 00:16:57.790 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.790 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.388 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:16:58.388 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:58.388 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.388 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.388 [2024-12-10 21:43:58.994591] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:58.388 [2024-12-10 21:43:58.994722] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.388 [2024-12-10 21:43:58.994769] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:58.388 [2024-12-10 21:43:58.994821] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.388 [2024-12-10 21:43:58.995351] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.388 [2024-12-10 21:43:58.995442] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:58.388 [2024-12-10 21:43:58.995583] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:58.388 [2024-12-10 21:43:58.995650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:58.388 pt2 00:16:58.388 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.388 21:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:58.388 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.388 21:43:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.388 [2024-12-10 21:43:59.002590] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:58.388 21:43:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.388 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:58.388 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:58.388 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:58.388 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:58.388 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.388 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:58.388 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.388 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:16:58.388 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.388 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.388 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.388 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.388 21:43:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.388 21:43:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.388 21:43:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.388 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.388 "name": "raid_bdev1", 00:16:58.388 "uuid": "d693530d-5fdf-415f-8ae8-b74e86c00791", 00:16:58.388 "strip_size_kb": 64, 00:16:58.388 "state": "configuring", 00:16:58.388 "raid_level": "raid5f", 00:16:58.388 "superblock": true, 00:16:58.388 "num_base_bdevs": 3, 00:16:58.388 "num_base_bdevs_discovered": 1, 00:16:58.388 "num_base_bdevs_operational": 3, 00:16:58.388 "base_bdevs_list": [ 00:16:58.388 { 00:16:58.388 "name": "pt1", 00:16:58.388 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:58.388 "is_configured": true, 00:16:58.388 "data_offset": 2048, 00:16:58.388 "data_size": 63488 00:16:58.388 }, 00:16:58.388 { 00:16:58.388 "name": null, 00:16:58.388 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:58.388 "is_configured": false, 00:16:58.388 "data_offset": 0, 00:16:58.388 "data_size": 63488 00:16:58.388 }, 00:16:58.388 { 00:16:58.388 "name": null, 00:16:58.388 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:58.388 "is_configured": false, 00:16:58.388 "data_offset": 2048, 00:16:58.388 "data_size": 63488 00:16:58.388 } 00:16:58.388 ] 00:16:58.388 }' 00:16:58.388 21:43:59 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.388 21:43:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.970 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:58.970 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:58.970 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:58.970 21:43:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.970 21:43:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.970 [2024-12-10 21:43:59.513699] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:58.970 [2024-12-10 21:43:59.513779] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.970 [2024-12-10 21:43:59.513799] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:58.970 [2024-12-10 21:43:59.513812] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.970 [2024-12-10 21:43:59.514349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.970 [2024-12-10 21:43:59.514372] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:58.970 [2024-12-10 21:43:59.514476] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:58.970 [2024-12-10 21:43:59.514521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:58.970 pt2 00:16:58.970 21:43:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.970 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:58.970 21:43:59 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:58.970 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:58.970 21:43:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.970 21:43:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.970 [2024-12-10 21:43:59.525655] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:58.970 [2024-12-10 21:43:59.525763] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.970 [2024-12-10 21:43:59.525784] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:58.970 [2024-12-10 21:43:59.525796] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.970 [2024-12-10 21:43:59.526282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.970 [2024-12-10 21:43:59.526307] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:58.970 [2024-12-10 21:43:59.526377] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:58.970 [2024-12-10 21:43:59.526401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:58.970 [2024-12-10 21:43:59.526564] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:58.970 [2024-12-10 21:43:59.526583] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:58.970 [2024-12-10 21:43:59.526840] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:58.970 [2024-12-10 21:43:59.533284] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:58.970 [2024-12-10 21:43:59.533347] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:58.970 [2024-12-10 21:43:59.533592] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.970 pt3 00:16:58.970 21:43:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.970 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:58.970 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:58.970 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:58.970 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:58.970 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:58.970 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:58.970 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.970 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:58.970 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.970 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.970 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.970 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.970 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.970 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.970 21:43:59 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.970 21:43:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.970 21:43:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.970 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.970 "name": "raid_bdev1", 00:16:58.970 "uuid": "d693530d-5fdf-415f-8ae8-b74e86c00791", 00:16:58.970 "strip_size_kb": 64, 00:16:58.970 "state": "online", 00:16:58.970 "raid_level": "raid5f", 00:16:58.970 "superblock": true, 00:16:58.970 "num_base_bdevs": 3, 00:16:58.970 "num_base_bdevs_discovered": 3, 00:16:58.970 "num_base_bdevs_operational": 3, 00:16:58.970 "base_bdevs_list": [ 00:16:58.970 { 00:16:58.970 "name": "pt1", 00:16:58.970 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:58.970 "is_configured": true, 00:16:58.970 "data_offset": 2048, 00:16:58.970 "data_size": 63488 00:16:58.970 }, 00:16:58.970 { 00:16:58.970 "name": "pt2", 00:16:58.970 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:58.970 "is_configured": true, 00:16:58.970 "data_offset": 2048, 00:16:58.970 "data_size": 63488 00:16:58.970 }, 00:16:58.970 { 00:16:58.970 "name": "pt3", 00:16:58.970 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:58.970 "is_configured": true, 00:16:58.970 "data_offset": 2048, 00:16:58.970 "data_size": 63488 00:16:58.970 } 00:16:58.970 ] 00:16:58.970 }' 00:16:58.970 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.970 21:43:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.230 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:59.230 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:59.230 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:16:59.230 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:59.230 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:59.230 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:59.230 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:59.230 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:59.231 21:43:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.231 21:43:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.231 [2024-12-10 21:43:59.948711] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:59.231 21:43:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.231 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:59.231 "name": "raid_bdev1", 00:16:59.231 "aliases": [ 00:16:59.231 "d693530d-5fdf-415f-8ae8-b74e86c00791" 00:16:59.231 ], 00:16:59.231 "product_name": "Raid Volume", 00:16:59.231 "block_size": 512, 00:16:59.231 "num_blocks": 126976, 00:16:59.231 "uuid": "d693530d-5fdf-415f-8ae8-b74e86c00791", 00:16:59.231 "assigned_rate_limits": { 00:16:59.231 "rw_ios_per_sec": 0, 00:16:59.231 "rw_mbytes_per_sec": 0, 00:16:59.231 "r_mbytes_per_sec": 0, 00:16:59.231 "w_mbytes_per_sec": 0 00:16:59.231 }, 00:16:59.231 "claimed": false, 00:16:59.231 "zoned": false, 00:16:59.231 "supported_io_types": { 00:16:59.231 "read": true, 00:16:59.231 "write": true, 00:16:59.231 "unmap": false, 00:16:59.231 "flush": false, 00:16:59.231 "reset": true, 00:16:59.231 "nvme_admin": false, 00:16:59.231 "nvme_io": false, 00:16:59.231 "nvme_io_md": false, 00:16:59.231 "write_zeroes": true, 00:16:59.231 "zcopy": false, 00:16:59.231 
"get_zone_info": false, 00:16:59.231 "zone_management": false, 00:16:59.231 "zone_append": false, 00:16:59.231 "compare": false, 00:16:59.231 "compare_and_write": false, 00:16:59.231 "abort": false, 00:16:59.231 "seek_hole": false, 00:16:59.231 "seek_data": false, 00:16:59.231 "copy": false, 00:16:59.231 "nvme_iov_md": false 00:16:59.231 }, 00:16:59.231 "driver_specific": { 00:16:59.231 "raid": { 00:16:59.231 "uuid": "d693530d-5fdf-415f-8ae8-b74e86c00791", 00:16:59.231 "strip_size_kb": 64, 00:16:59.231 "state": "online", 00:16:59.231 "raid_level": "raid5f", 00:16:59.231 "superblock": true, 00:16:59.231 "num_base_bdevs": 3, 00:16:59.231 "num_base_bdevs_discovered": 3, 00:16:59.231 "num_base_bdevs_operational": 3, 00:16:59.231 "base_bdevs_list": [ 00:16:59.231 { 00:16:59.231 "name": "pt1", 00:16:59.231 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:59.231 "is_configured": true, 00:16:59.231 "data_offset": 2048, 00:16:59.231 "data_size": 63488 00:16:59.231 }, 00:16:59.231 { 00:16:59.231 "name": "pt2", 00:16:59.231 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:59.231 "is_configured": true, 00:16:59.231 "data_offset": 2048, 00:16:59.231 "data_size": 63488 00:16:59.231 }, 00:16:59.231 { 00:16:59.231 "name": "pt3", 00:16:59.231 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:59.231 "is_configured": true, 00:16:59.231 "data_offset": 2048, 00:16:59.231 "data_size": 63488 00:16:59.231 } 00:16:59.231 ] 00:16:59.231 } 00:16:59.231 } 00:16:59.231 }' 00:16:59.231 21:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:59.491 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:59.491 pt2 00:16:59.491 pt3' 00:16:59.491 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:59.491 21:44:00 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:59.491 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:59.491 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:59.491 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:59.491 21:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.491 21:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.491 21:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.491 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:59.491 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:59.491 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:59.491 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:59.491 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:59.491 21:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.491 21:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.491 21:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.491 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:59.491 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:59.491 21:44:00 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:59.491 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:59.491 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:59.491 21:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.491 21:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.491 21:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.491 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:59.491 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:59.491 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:59.491 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:59.491 21:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.491 21:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.491 [2024-12-10 21:44:00.236317] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:59.491 21:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.750 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d693530d-5fdf-415f-8ae8-b74e86c00791 '!=' d693530d-5fdf-415f-8ae8-b74e86c00791 ']' 00:16:59.750 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:59.750 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:59.750 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:16:59.750 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:59.750 21:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.750 21:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.750 [2024-12-10 21:44:00.284047] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:59.750 21:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.750 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:59.750 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.750 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.750 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:59.750 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:59.750 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:59.750 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.750 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.750 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.750 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.750 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.750 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.750 21:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:59.750 21:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.750 21:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.750 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.750 "name": "raid_bdev1", 00:16:59.750 "uuid": "d693530d-5fdf-415f-8ae8-b74e86c00791", 00:16:59.750 "strip_size_kb": 64, 00:16:59.750 "state": "online", 00:16:59.750 "raid_level": "raid5f", 00:16:59.750 "superblock": true, 00:16:59.750 "num_base_bdevs": 3, 00:16:59.750 "num_base_bdevs_discovered": 2, 00:16:59.750 "num_base_bdevs_operational": 2, 00:16:59.750 "base_bdevs_list": [ 00:16:59.750 { 00:16:59.750 "name": null, 00:16:59.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.750 "is_configured": false, 00:16:59.750 "data_offset": 0, 00:16:59.750 "data_size": 63488 00:16:59.750 }, 00:16:59.750 { 00:16:59.750 "name": "pt2", 00:16:59.750 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:59.750 "is_configured": true, 00:16:59.750 "data_offset": 2048, 00:16:59.750 "data_size": 63488 00:16:59.750 }, 00:16:59.750 { 00:16:59.750 "name": "pt3", 00:16:59.750 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:59.750 "is_configured": true, 00:16:59.750 "data_offset": 2048, 00:16:59.750 "data_size": 63488 00:16:59.750 } 00:16:59.750 ] 00:16:59.750 }' 00:16:59.750 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.750 21:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.010 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:00.010 21:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.010 21:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.010 [2024-12-10 21:44:00.767205] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:00.010 [2024-12-10 21:44:00.767301] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:00.010 [2024-12-10 21:44:00.767409] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:00.010 [2024-12-10 21:44:00.767508] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:00.010 [2024-12-10 21:44:00.767601] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:00.010 21:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.010 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.010 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:00.010 21:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.010 21:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.010 21:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.268 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:00.268 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:00.268 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:00.268 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:00.268 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:00.268 21:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.268 21:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:17:00.268 21:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.268 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:00.268 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:00.268 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:17:00.268 21:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.268 21:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.268 21:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.268 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:00.268 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:00.268 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:00.268 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:00.268 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:00.268 21:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.268 21:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.268 [2024-12-10 21:44:00.855023] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:00.268 [2024-12-10 21:44:00.855088] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:00.268 [2024-12-10 21:44:00.855107] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:17:00.268 [2024-12-10 21:44:00.855118] vbdev_passthru.c: 697:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:17:00.268 [2024-12-10 21:44:00.857454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:00.268 [2024-12-10 21:44:00.857561] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:00.268 [2024-12-10 21:44:00.857661] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:00.268 [2024-12-10 21:44:00.857719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:00.268 pt2 00:17:00.268 21:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.268 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:17:00.268 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:00.268 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:00.268 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:00.268 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.268 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:00.268 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.268 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.268 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.268 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.268 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.268 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:17:00.268 21:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.268 21:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.268 21:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.268 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.268 "name": "raid_bdev1", 00:17:00.268 "uuid": "d693530d-5fdf-415f-8ae8-b74e86c00791", 00:17:00.268 "strip_size_kb": 64, 00:17:00.268 "state": "configuring", 00:17:00.268 "raid_level": "raid5f", 00:17:00.268 "superblock": true, 00:17:00.268 "num_base_bdevs": 3, 00:17:00.268 "num_base_bdevs_discovered": 1, 00:17:00.268 "num_base_bdevs_operational": 2, 00:17:00.268 "base_bdevs_list": [ 00:17:00.268 { 00:17:00.268 "name": null, 00:17:00.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.268 "is_configured": false, 00:17:00.268 "data_offset": 2048, 00:17:00.268 "data_size": 63488 00:17:00.268 }, 00:17:00.268 { 00:17:00.268 "name": "pt2", 00:17:00.268 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:00.268 "is_configured": true, 00:17:00.268 "data_offset": 2048, 00:17:00.268 "data_size": 63488 00:17:00.268 }, 00:17:00.268 { 00:17:00.268 "name": null, 00:17:00.268 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:00.268 "is_configured": false, 00:17:00.268 "data_offset": 2048, 00:17:00.268 "data_size": 63488 00:17:00.269 } 00:17:00.269 ] 00:17:00.269 }' 00:17:00.269 21:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.269 21:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.528 21:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:00.528 21:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:00.528 21:44:01 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:17:00.528 21:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:00.528 21:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.528 21:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.528 [2024-12-10 21:44:01.294390] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:00.528 [2024-12-10 21:44:01.294512] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:00.528 [2024-12-10 21:44:01.294551] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:00.528 [2024-12-10 21:44:01.294570] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:00.528 [2024-12-10 21:44:01.295247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:00.528 [2024-12-10 21:44:01.295308] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:00.528 [2024-12-10 21:44:01.295448] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:00.528 [2024-12-10 21:44:01.295495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:00.528 [2024-12-10 21:44:01.295663] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:00.528 [2024-12-10 21:44:01.295691] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:00.528 [2024-12-10 21:44:01.296073] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:00.528 [2024-12-10 21:44:01.302724] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:00.528 [2024-12-10 21:44:01.302756] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:17:00.528 [2024-12-10 21:44:01.303177] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:00.528 pt3 00:17:00.528 21:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.528 21:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:00.528 21:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:00.528 21:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:00.528 21:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:00.528 21:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.528 21:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:00.528 21:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.528 21:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.528 21:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.528 21:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.790 21:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.790 21:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.790 21:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.790 21:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.790 21:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.790 21:44:01 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.790 "name": "raid_bdev1", 00:17:00.790 "uuid": "d693530d-5fdf-415f-8ae8-b74e86c00791", 00:17:00.790 "strip_size_kb": 64, 00:17:00.790 "state": "online", 00:17:00.790 "raid_level": "raid5f", 00:17:00.790 "superblock": true, 00:17:00.790 "num_base_bdevs": 3, 00:17:00.790 "num_base_bdevs_discovered": 2, 00:17:00.790 "num_base_bdevs_operational": 2, 00:17:00.790 "base_bdevs_list": [ 00:17:00.790 { 00:17:00.790 "name": null, 00:17:00.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.790 "is_configured": false, 00:17:00.790 "data_offset": 2048, 00:17:00.790 "data_size": 63488 00:17:00.790 }, 00:17:00.790 { 00:17:00.790 "name": "pt2", 00:17:00.790 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:00.790 "is_configured": true, 00:17:00.790 "data_offset": 2048, 00:17:00.790 "data_size": 63488 00:17:00.790 }, 00:17:00.790 { 00:17:00.790 "name": "pt3", 00:17:00.790 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:00.790 "is_configured": true, 00:17:00.790 "data_offset": 2048, 00:17:00.790 "data_size": 63488 00:17:00.790 } 00:17:00.790 ] 00:17:00.790 }' 00:17:00.790 21:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.790 21:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.049 21:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:01.049 21:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.049 21:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.049 [2024-12-10 21:44:01.734389] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:01.050 [2024-12-10 21:44:01.734522] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:01.050 [2024-12-10 21:44:01.734671] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:01.050 [2024-12-10 21:44:01.734790] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:01.050 [2024-12-10 21:44:01.734850] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:01.050 21:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.050 21:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.050 21:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:01.050 21:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.050 21:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.050 21:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.050 21:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:01.050 21:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:01.050 21:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:17:01.050 21:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:17:01.050 21:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:17:01.050 21:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.050 21:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.050 21:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.050 21:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:17:01.050 21:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.050 21:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.050 [2024-12-10 21:44:01.810288] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:01.050 [2024-12-10 21:44:01.810364] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.050 [2024-12-10 21:44:01.810388] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:01.050 [2024-12-10 21:44:01.810399] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.050 [2024-12-10 21:44:01.813236] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.050 [2024-12-10 21:44:01.813351] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:01.050 [2024-12-10 21:44:01.813487] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:01.050 [2024-12-10 21:44:01.813549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:01.050 [2024-12-10 21:44:01.813750] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:01.050 [2024-12-10 21:44:01.813766] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:01.050 [2024-12-10 21:44:01.813787] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:01.050 [2024-12-10 21:44:01.813874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:01.050 pt1 00:17:01.050 21:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.050 21:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:17:01.050 21:44:01 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:17:01.050 21:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.050 21:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:01.050 21:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:01.050 21:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.050 21:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:01.050 21:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.050 21:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.050 21:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.050 21:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.050 21:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.050 21:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.050 21:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.050 21:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.310 21:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.310 21:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.310 "name": "raid_bdev1", 00:17:01.310 "uuid": "d693530d-5fdf-415f-8ae8-b74e86c00791", 00:17:01.310 "strip_size_kb": 64, 00:17:01.310 "state": "configuring", 00:17:01.310 "raid_level": "raid5f", 00:17:01.310 
"superblock": true, 00:17:01.310 "num_base_bdevs": 3, 00:17:01.310 "num_base_bdevs_discovered": 1, 00:17:01.310 "num_base_bdevs_operational": 2, 00:17:01.310 "base_bdevs_list": [ 00:17:01.310 { 00:17:01.310 "name": null, 00:17:01.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.310 "is_configured": false, 00:17:01.310 "data_offset": 2048, 00:17:01.310 "data_size": 63488 00:17:01.310 }, 00:17:01.310 { 00:17:01.310 "name": "pt2", 00:17:01.310 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:01.310 "is_configured": true, 00:17:01.310 "data_offset": 2048, 00:17:01.310 "data_size": 63488 00:17:01.310 }, 00:17:01.310 { 00:17:01.310 "name": null, 00:17:01.310 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:01.310 "is_configured": false, 00:17:01.310 "data_offset": 2048, 00:17:01.310 "data_size": 63488 00:17:01.310 } 00:17:01.310 ] 00:17:01.310 }' 00:17:01.310 21:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.310 21:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.570 21:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:17:01.570 21:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.570 21:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.570 21:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:01.570 21:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.570 21:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:17:01.570 21:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:01.570 21:44:02 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.570 21:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.570 [2024-12-10 21:44:02.321513] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:01.570 [2024-12-10 21:44:02.321646] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.570 [2024-12-10 21:44:02.321700] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:01.570 [2024-12-10 21:44:02.321741] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.570 [2024-12-10 21:44:02.322351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.570 [2024-12-10 21:44:02.322441] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:01.570 [2024-12-10 21:44:02.322592] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:01.570 [2024-12-10 21:44:02.322658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:01.570 [2024-12-10 21:44:02.322860] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:01.570 [2024-12-10 21:44:02.322910] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:01.570 [2024-12-10 21:44:02.323270] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:01.570 [2024-12-10 21:44:02.331156] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:01.570 [2024-12-10 21:44:02.331241] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:01.570 [2024-12-10 21:44:02.331645] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.570 pt3 00:17:01.570 21:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:01.570 21:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:01.570 21:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.570 21:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.570 21:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:01.570 21:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.570 21:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:01.570 21:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.570 21:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.570 21:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.570 21:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.570 21:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.570 21:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.570 21:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.570 21:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.828 21:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.828 21:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.828 "name": "raid_bdev1", 00:17:01.828 "uuid": "d693530d-5fdf-415f-8ae8-b74e86c00791", 00:17:01.828 "strip_size_kb": 64, 00:17:01.828 "state": "online", 00:17:01.828 "raid_level": 
"raid5f", 00:17:01.828 "superblock": true, 00:17:01.828 "num_base_bdevs": 3, 00:17:01.828 "num_base_bdevs_discovered": 2, 00:17:01.828 "num_base_bdevs_operational": 2, 00:17:01.828 "base_bdevs_list": [ 00:17:01.828 { 00:17:01.828 "name": null, 00:17:01.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.828 "is_configured": false, 00:17:01.828 "data_offset": 2048, 00:17:01.828 "data_size": 63488 00:17:01.828 }, 00:17:01.828 { 00:17:01.828 "name": "pt2", 00:17:01.828 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:01.828 "is_configured": true, 00:17:01.828 "data_offset": 2048, 00:17:01.828 "data_size": 63488 00:17:01.828 }, 00:17:01.828 { 00:17:01.828 "name": "pt3", 00:17:01.828 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:01.828 "is_configured": true, 00:17:01.828 "data_offset": 2048, 00:17:01.828 "data_size": 63488 00:17:01.828 } 00:17:01.828 ] 00:17:01.828 }' 00:17:01.828 21:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.828 21:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.087 21:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:02.087 21:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:02.088 21:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.088 21:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.088 21:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.088 21:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:02.088 21:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:02.088 21:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 
00:17:02.088 21:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.088 21:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.088 [2024-12-10 21:44:02.860167] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:02.347 21:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.347 21:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' d693530d-5fdf-415f-8ae8-b74e86c00791 '!=' d693530d-5fdf-415f-8ae8-b74e86c00791 ']' 00:17:02.347 21:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81309 00:17:02.347 21:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81309 ']' 00:17:02.347 21:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81309 00:17:02.347 21:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:17:02.347 21:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:02.347 21:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81309 00:17:02.347 21:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:02.347 killing process with pid 81309 00:17:02.347 21:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:02.347 21:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81309' 00:17:02.347 21:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81309 00:17:02.347 [2024-12-10 21:44:02.954466] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:02.347 [2024-12-10 21:44:02.954572] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:17:02.347 [2024-12-10 21:44:02.954644] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:02.347 [2024-12-10 21:44:02.954658] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:02.347 21:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81309 00:17:02.607 [2024-12-10 21:44:03.295023] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:03.984 ************************************ 00:17:03.984 END TEST raid5f_superblock_test 00:17:03.984 ************************************ 00:17:03.984 21:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:03.984 00:17:03.984 real 0m8.279s 00:17:03.984 user 0m12.886s 00:17:03.984 sys 0m1.537s 00:17:03.984 21:44:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:03.984 21:44:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.984 21:44:04 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:17:03.984 21:44:04 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:17:03.984 21:44:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:03.984 21:44:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:03.984 21:44:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:03.984 ************************************ 00:17:03.984 START TEST raid5f_rebuild_test 00:17:03.984 ************************************ 00:17:03.984 21:44:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:17:03.984 21:44:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:03.984 21:44:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:17:03.984 21:44:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:03.984 21:44:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:03.984 21:44:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:03.984 21:44:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:03.984 21:44:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:03.984 21:44:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:03.984 21:44:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:03.984 21:44:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:03.984 21:44:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:03.984 21:44:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:03.984 21:44:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:03.984 21:44:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:03.984 21:44:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:03.984 21:44:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:03.984 21:44:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:03.984 21:44:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:03.984 21:44:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:03.984 21:44:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:03.984 21:44:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:03.984 21:44:04 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:03.984 21:44:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:03.984 21:44:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:03.984 21:44:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:03.984 21:44:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:03.984 21:44:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:03.984 21:44:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:03.984 21:44:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81753 00:17:03.984 21:44:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:03.984 21:44:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81753 00:17:03.984 21:44:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81753 ']' 00:17:03.984 21:44:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.984 21:44:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:03.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.984 21:44:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:03.984 21:44:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:03.984 21:44:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.984 [2024-12-10 21:44:04.726859] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:17:03.984 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:03.984 Zero copy mechanism will not be used. 00:17:03.984 [2024-12-10 21:44:04.727079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81753 ] 00:17:04.244 [2024-12-10 21:44:04.902184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.504 [2024-12-10 21:44:05.026304] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.504 [2024-12-10 21:44:05.244519] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:04.504 [2024-12-10 21:44:05.244586] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.073 BaseBdev1_malloc 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.073 
21:44:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.073 [2024-12-10 21:44:05.645155] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:05.073 [2024-12-10 21:44:05.645235] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.073 [2024-12-10 21:44:05.645259] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:05.073 [2024-12-10 21:44:05.645271] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.073 [2024-12-10 21:44:05.647378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.073 [2024-12-10 21:44:05.647430] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:05.073 BaseBdev1 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.073 BaseBdev2_malloc 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.073 [2024-12-10 21:44:05.701268] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:05.073 [2024-12-10 21:44:05.701397] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.073 [2024-12-10 21:44:05.701442] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:05.073 [2024-12-10 21:44:05.701456] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.073 [2024-12-10 21:44:05.703738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.073 [2024-12-10 21:44:05.703782] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:05.073 BaseBdev2 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.073 BaseBdev3_malloc 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.073 [2024-12-10 21:44:05.765035] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:05.073 [2024-12-10 21:44:05.765099] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.073 [2024-12-10 21:44:05.765123] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:05.073 [2024-12-10 21:44:05.765133] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.073 [2024-12-10 21:44:05.767230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.073 [2024-12-10 21:44:05.767317] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:05.073 BaseBdev3 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.073 spare_malloc 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.073 spare_delay 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.073 [2024-12-10 21:44:05.837243] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:05.073 [2024-12-10 21:44:05.837306] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.073 [2024-12-10 21:44:05.837328] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:05.073 [2024-12-10 21:44:05.837339] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.073 [2024-12-10 21:44:05.839600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.073 [2024-12-10 21:44:05.839647] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:05.073 spare 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.073 21:44:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.073 [2024-12-10 21:44:05.849264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:05.073 [2024-12-10 21:44:05.851115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:05.073 [2024-12-10 21:44:05.851234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:05.073 [2024-12-10 21:44:05.851328] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:05.073 [2024-12-10 21:44:05.851340] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:17:05.073 [2024-12-10 
21:44:05.851647] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:05.337 [2024-12-10 21:44:05.857774] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:05.337 [2024-12-10 21:44:05.857846] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:05.337 [2024-12-10 21:44:05.858041] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.337 21:44:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.337 21:44:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:05.337 21:44:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:05.337 21:44:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:05.337 21:44:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:05.337 21:44:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:05.337 21:44:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:05.337 21:44:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.337 21:44:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.337 21:44:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.337 21:44:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.337 21:44:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.337 21:44:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.337 21:44:05 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.337 21:44:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.337 21:44:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.337 21:44:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.337 "name": "raid_bdev1", 00:17:05.337 "uuid": "717bba7c-174a-4da5-bca2-6039de9d9193", 00:17:05.337 "strip_size_kb": 64, 00:17:05.337 "state": "online", 00:17:05.337 "raid_level": "raid5f", 00:17:05.337 "superblock": false, 00:17:05.337 "num_base_bdevs": 3, 00:17:05.337 "num_base_bdevs_discovered": 3, 00:17:05.337 "num_base_bdevs_operational": 3, 00:17:05.337 "base_bdevs_list": [ 00:17:05.337 { 00:17:05.337 "name": "BaseBdev1", 00:17:05.337 "uuid": "7238d195-79ed-567b-9aa6-8e85cebab85a", 00:17:05.337 "is_configured": true, 00:17:05.337 "data_offset": 0, 00:17:05.337 "data_size": 65536 00:17:05.337 }, 00:17:05.337 { 00:17:05.337 "name": "BaseBdev2", 00:17:05.337 "uuid": "d8a06694-70d7-5201-b06e-3ead41842ff0", 00:17:05.337 "is_configured": true, 00:17:05.337 "data_offset": 0, 00:17:05.337 "data_size": 65536 00:17:05.337 }, 00:17:05.337 { 00:17:05.337 "name": "BaseBdev3", 00:17:05.337 "uuid": "64bdc1e3-382e-5091-b4b0-929624105026", 00:17:05.337 "is_configured": true, 00:17:05.337 "data_offset": 0, 00:17:05.337 "data_size": 65536 00:17:05.337 } 00:17:05.337 ] 00:17:05.337 }' 00:17:05.337 21:44:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.337 21:44:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.602 21:44:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:05.602 21:44:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:05.602 21:44:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.602 21:44:06 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.602 [2024-12-10 21:44:06.276652] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:05.602 21:44:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.602 21:44:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:17:05.602 21:44:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:05.602 21:44:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.602 21:44:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.602 21:44:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.602 21:44:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.602 21:44:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:05.602 21:44:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:05.602 21:44:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:05.602 21:44:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:05.602 21:44:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:05.602 21:44:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:05.602 21:44:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:05.602 21:44:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:05.602 21:44:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:05.602 21:44:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:17:05.602 21:44:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:05.602 21:44:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:05.602 21:44:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:05.602 21:44:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:05.862 [2024-12-10 21:44:06.524121] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:05.862 /dev/nbd0 00:17:05.862 21:44:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:05.862 21:44:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:05.862 21:44:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:05.862 21:44:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:05.862 21:44:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:05.862 21:44:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:05.862 21:44:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:05.862 21:44:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:05.862 21:44:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:05.862 21:44:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:05.862 21:44:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:05.862 1+0 records in 00:17:05.862 1+0 records out 00:17:05.862 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000509451 s, 8.0 MB/s 00:17:05.862 
21:44:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.862 21:44:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:05.862 21:44:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.862 21:44:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:05.862 21:44:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:05.862 21:44:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:05.862 21:44:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:05.862 21:44:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:05.862 21:44:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:17:05.862 21:44:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:17:05.862 21:44:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:17:06.431 512+0 records in 00:17:06.431 512+0 records out 00:17:06.431 67108864 bytes (67 MB, 64 MiB) copied, 0.398465 s, 168 MB/s 00:17:06.431 21:44:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:06.431 21:44:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:06.431 21:44:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:06.431 21:44:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:06.431 21:44:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:06.431 21:44:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:17:06.432 21:44:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:06.691 [2024-12-10 21:44:07.213680] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:06.691 21:44:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:06.691 21:44:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:06.691 21:44:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:06.691 21:44:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:06.691 21:44:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:06.691 21:44:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:06.691 21:44:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:06.691 21:44:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:06.691 21:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:06.691 21:44:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.691 21:44:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.691 [2024-12-10 21:44:07.258268] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:06.691 21:44:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.691 21:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:06.691 21:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.691 21:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.691 21:44:07 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:06.691 21:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.691 21:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:06.691 21:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.691 21:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.691 21:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.691 21:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.691 21:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.691 21:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.691 21:44:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.691 21:44:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.691 21:44:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.691 21:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.691 "name": "raid_bdev1", 00:17:06.691 "uuid": "717bba7c-174a-4da5-bca2-6039de9d9193", 00:17:06.691 "strip_size_kb": 64, 00:17:06.691 "state": "online", 00:17:06.691 "raid_level": "raid5f", 00:17:06.691 "superblock": false, 00:17:06.691 "num_base_bdevs": 3, 00:17:06.691 "num_base_bdevs_discovered": 2, 00:17:06.691 "num_base_bdevs_operational": 2, 00:17:06.691 "base_bdevs_list": [ 00:17:06.691 { 00:17:06.691 "name": null, 00:17:06.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.691 "is_configured": false, 00:17:06.691 "data_offset": 0, 00:17:06.691 "data_size": 65536 00:17:06.691 }, 00:17:06.691 { 00:17:06.691 
"name": "BaseBdev2", 00:17:06.691 "uuid": "d8a06694-70d7-5201-b06e-3ead41842ff0", 00:17:06.691 "is_configured": true, 00:17:06.691 "data_offset": 0, 00:17:06.691 "data_size": 65536 00:17:06.691 }, 00:17:06.691 { 00:17:06.691 "name": "BaseBdev3", 00:17:06.691 "uuid": "64bdc1e3-382e-5091-b4b0-929624105026", 00:17:06.691 "is_configured": true, 00:17:06.691 "data_offset": 0, 00:17:06.691 "data_size": 65536 00:17:06.691 } 00:17:06.691 ] 00:17:06.691 }' 00:17:06.691 21:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.691 21:44:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.951 21:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:06.951 21:44:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.951 21:44:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.951 [2024-12-10 21:44:07.729524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:07.211 [2024-12-10 21:44:07.747954] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:17:07.211 21:44:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.211 21:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:07.211 [2024-12-10 21:44:07.756696] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:08.149 21:44:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:08.149 21:44:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.149 21:44:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:08.149 21:44:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:17:08.149 21:44:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.149 21:44:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.149 21:44:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.149 21:44:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.149 21:44:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.149 21:44:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.149 21:44:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.149 "name": "raid_bdev1", 00:17:08.149 "uuid": "717bba7c-174a-4da5-bca2-6039de9d9193", 00:17:08.149 "strip_size_kb": 64, 00:17:08.149 "state": "online", 00:17:08.149 "raid_level": "raid5f", 00:17:08.149 "superblock": false, 00:17:08.149 "num_base_bdevs": 3, 00:17:08.149 "num_base_bdevs_discovered": 3, 00:17:08.149 "num_base_bdevs_operational": 3, 00:17:08.149 "process": { 00:17:08.149 "type": "rebuild", 00:17:08.149 "target": "spare", 00:17:08.149 "progress": { 00:17:08.149 "blocks": 20480, 00:17:08.149 "percent": 15 00:17:08.149 } 00:17:08.149 }, 00:17:08.149 "base_bdevs_list": [ 00:17:08.149 { 00:17:08.149 "name": "spare", 00:17:08.149 "uuid": "7a0fb4da-5ab1-5c6c-909e-3db660bceecf", 00:17:08.149 "is_configured": true, 00:17:08.149 "data_offset": 0, 00:17:08.149 "data_size": 65536 00:17:08.149 }, 00:17:08.149 { 00:17:08.150 "name": "BaseBdev2", 00:17:08.150 "uuid": "d8a06694-70d7-5201-b06e-3ead41842ff0", 00:17:08.150 "is_configured": true, 00:17:08.150 "data_offset": 0, 00:17:08.150 "data_size": 65536 00:17:08.150 }, 00:17:08.150 { 00:17:08.150 "name": "BaseBdev3", 00:17:08.150 "uuid": "64bdc1e3-382e-5091-b4b0-929624105026", 00:17:08.150 "is_configured": true, 00:17:08.150 "data_offset": 0, 00:17:08.150 
"data_size": 65536 00:17:08.150 } 00:17:08.150 ] 00:17:08.150 }' 00:17:08.150 21:44:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.150 21:44:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:08.150 21:44:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.150 21:44:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:08.150 21:44:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:08.150 21:44:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.150 21:44:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.150 [2024-12-10 21:44:08.888058] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:08.408 [2024-12-10 21:44:08.968231] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:08.408 [2024-12-10 21:44:08.968426] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.408 [2024-12-10 21:44:08.968492] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:08.408 [2024-12-10 21:44:08.968535] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:08.408 21:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.408 21:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:08.408 21:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.408 21:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.408 21:44:09 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:08.408 21:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.408 21:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:08.408 21:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.408 21:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.408 21:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.408 21:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.408 21:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.408 21:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.408 21:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.408 21:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.408 21:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.408 21:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.408 "name": "raid_bdev1", 00:17:08.408 "uuid": "717bba7c-174a-4da5-bca2-6039de9d9193", 00:17:08.408 "strip_size_kb": 64, 00:17:08.408 "state": "online", 00:17:08.408 "raid_level": "raid5f", 00:17:08.408 "superblock": false, 00:17:08.408 "num_base_bdevs": 3, 00:17:08.408 "num_base_bdevs_discovered": 2, 00:17:08.408 "num_base_bdevs_operational": 2, 00:17:08.408 "base_bdevs_list": [ 00:17:08.408 { 00:17:08.408 "name": null, 00:17:08.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.408 "is_configured": false, 00:17:08.408 "data_offset": 0, 00:17:08.408 "data_size": 65536 00:17:08.408 }, 00:17:08.408 { 00:17:08.408 "name": "BaseBdev2", 00:17:08.408 
"uuid": "d8a06694-70d7-5201-b06e-3ead41842ff0", 00:17:08.408 "is_configured": true, 00:17:08.408 "data_offset": 0, 00:17:08.408 "data_size": 65536 00:17:08.408 }, 00:17:08.408 { 00:17:08.408 "name": "BaseBdev3", 00:17:08.408 "uuid": "64bdc1e3-382e-5091-b4b0-929624105026", 00:17:08.408 "is_configured": true, 00:17:08.408 "data_offset": 0, 00:17:08.408 "data_size": 65536 00:17:08.408 } 00:17:08.408 ] 00:17:08.408 }' 00:17:08.408 21:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.408 21:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.975 21:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:08.975 21:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.975 21:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:08.975 21:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:08.975 21:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.975 21:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.975 21:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.975 21:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.975 21:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.975 21:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.975 21:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.975 "name": "raid_bdev1", 00:17:08.975 "uuid": "717bba7c-174a-4da5-bca2-6039de9d9193", 00:17:08.975 "strip_size_kb": 64, 00:17:08.975 "state": "online", 00:17:08.975 "raid_level": 
"raid5f", 00:17:08.975 "superblock": false, 00:17:08.975 "num_base_bdevs": 3, 00:17:08.975 "num_base_bdevs_discovered": 2, 00:17:08.975 "num_base_bdevs_operational": 2, 00:17:08.975 "base_bdevs_list": [ 00:17:08.975 { 00:17:08.975 "name": null, 00:17:08.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.975 "is_configured": false, 00:17:08.975 "data_offset": 0, 00:17:08.975 "data_size": 65536 00:17:08.975 }, 00:17:08.975 { 00:17:08.975 "name": "BaseBdev2", 00:17:08.975 "uuid": "d8a06694-70d7-5201-b06e-3ead41842ff0", 00:17:08.975 "is_configured": true, 00:17:08.975 "data_offset": 0, 00:17:08.975 "data_size": 65536 00:17:08.975 }, 00:17:08.975 { 00:17:08.975 "name": "BaseBdev3", 00:17:08.975 "uuid": "64bdc1e3-382e-5091-b4b0-929624105026", 00:17:08.975 "is_configured": true, 00:17:08.975 "data_offset": 0, 00:17:08.975 "data_size": 65536 00:17:08.975 } 00:17:08.975 ] 00:17:08.975 }' 00:17:08.975 21:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.975 21:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:08.975 21:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.975 21:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:08.975 21:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:08.975 21:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.975 21:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.975 [2024-12-10 21:44:09.631393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:08.975 [2024-12-10 21:44:09.648490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:17:08.975 21:44:09 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.975 21:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:08.975 [2024-12-10 21:44:09.656352] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:09.911 21:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:09.911 21:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.911 21:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:09.911 21:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:09.911 21:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.911 21:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.911 21:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.911 21:44:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.911 21:44:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.911 21:44:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.170 21:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.170 "name": "raid_bdev1", 00:17:10.170 "uuid": "717bba7c-174a-4da5-bca2-6039de9d9193", 00:17:10.170 "strip_size_kb": 64, 00:17:10.170 "state": "online", 00:17:10.170 "raid_level": "raid5f", 00:17:10.170 "superblock": false, 00:17:10.170 "num_base_bdevs": 3, 00:17:10.170 "num_base_bdevs_discovered": 3, 00:17:10.170 "num_base_bdevs_operational": 3, 00:17:10.170 "process": { 00:17:10.170 "type": "rebuild", 00:17:10.170 "target": "spare", 00:17:10.170 "progress": { 00:17:10.170 "blocks": 20480, 00:17:10.170 
"percent": 15 00:17:10.170 } 00:17:10.170 }, 00:17:10.170 "base_bdevs_list": [ 00:17:10.170 { 00:17:10.170 "name": "spare", 00:17:10.170 "uuid": "7a0fb4da-5ab1-5c6c-909e-3db660bceecf", 00:17:10.170 "is_configured": true, 00:17:10.170 "data_offset": 0, 00:17:10.170 "data_size": 65536 00:17:10.170 }, 00:17:10.170 { 00:17:10.170 "name": "BaseBdev2", 00:17:10.170 "uuid": "d8a06694-70d7-5201-b06e-3ead41842ff0", 00:17:10.170 "is_configured": true, 00:17:10.170 "data_offset": 0, 00:17:10.170 "data_size": 65536 00:17:10.170 }, 00:17:10.170 { 00:17:10.170 "name": "BaseBdev3", 00:17:10.170 "uuid": "64bdc1e3-382e-5091-b4b0-929624105026", 00:17:10.170 "is_configured": true, 00:17:10.170 "data_offset": 0, 00:17:10.170 "data_size": 65536 00:17:10.170 } 00:17:10.170 ] 00:17:10.170 }' 00:17:10.170 21:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.170 21:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:10.170 21:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.171 21:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:10.171 21:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:10.171 21:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:17:10.171 21:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:10.171 21:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=562 00:17:10.171 21:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:10.171 21:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:10.171 21:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:17:10.171 21:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:10.171 21:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:10.171 21:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.171 21:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.171 21:44:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.171 21:44:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.171 21:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.171 21:44:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.171 21:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.171 "name": "raid_bdev1", 00:17:10.171 "uuid": "717bba7c-174a-4da5-bca2-6039de9d9193", 00:17:10.171 "strip_size_kb": 64, 00:17:10.171 "state": "online", 00:17:10.171 "raid_level": "raid5f", 00:17:10.171 "superblock": false, 00:17:10.171 "num_base_bdevs": 3, 00:17:10.171 "num_base_bdevs_discovered": 3, 00:17:10.171 "num_base_bdevs_operational": 3, 00:17:10.171 "process": { 00:17:10.171 "type": "rebuild", 00:17:10.171 "target": "spare", 00:17:10.171 "progress": { 00:17:10.171 "blocks": 22528, 00:17:10.171 "percent": 17 00:17:10.171 } 00:17:10.171 }, 00:17:10.171 "base_bdevs_list": [ 00:17:10.171 { 00:17:10.171 "name": "spare", 00:17:10.171 "uuid": "7a0fb4da-5ab1-5c6c-909e-3db660bceecf", 00:17:10.171 "is_configured": true, 00:17:10.171 "data_offset": 0, 00:17:10.171 "data_size": 65536 00:17:10.171 }, 00:17:10.171 { 00:17:10.171 "name": "BaseBdev2", 00:17:10.171 "uuid": "d8a06694-70d7-5201-b06e-3ead41842ff0", 00:17:10.171 "is_configured": true, 00:17:10.171 "data_offset": 0, 00:17:10.171 
"data_size": 65536 00:17:10.171 }, 00:17:10.171 { 00:17:10.171 "name": "BaseBdev3", 00:17:10.171 "uuid": "64bdc1e3-382e-5091-b4b0-929624105026", 00:17:10.171 "is_configured": true, 00:17:10.171 "data_offset": 0, 00:17:10.171 "data_size": 65536 00:17:10.171 } 00:17:10.171 ] 00:17:10.171 }' 00:17:10.171 21:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.171 21:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:10.171 21:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.171 21:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:10.171 21:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:11.599 21:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:11.599 21:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:11.599 21:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.599 21:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:11.599 21:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:11.599 21:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.599 21:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.599 21:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.599 21:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.599 21:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.599 21:44:11 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.599 21:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.599 "name": "raid_bdev1", 00:17:11.599 "uuid": "717bba7c-174a-4da5-bca2-6039de9d9193", 00:17:11.599 "strip_size_kb": 64, 00:17:11.599 "state": "online", 00:17:11.599 "raid_level": "raid5f", 00:17:11.599 "superblock": false, 00:17:11.599 "num_base_bdevs": 3, 00:17:11.599 "num_base_bdevs_discovered": 3, 00:17:11.599 "num_base_bdevs_operational": 3, 00:17:11.599 "process": { 00:17:11.599 "type": "rebuild", 00:17:11.599 "target": "spare", 00:17:11.599 "progress": { 00:17:11.599 "blocks": 45056, 00:17:11.600 "percent": 34 00:17:11.600 } 00:17:11.600 }, 00:17:11.600 "base_bdevs_list": [ 00:17:11.600 { 00:17:11.600 "name": "spare", 00:17:11.600 "uuid": "7a0fb4da-5ab1-5c6c-909e-3db660bceecf", 00:17:11.600 "is_configured": true, 00:17:11.600 "data_offset": 0, 00:17:11.600 "data_size": 65536 00:17:11.600 }, 00:17:11.600 { 00:17:11.600 "name": "BaseBdev2", 00:17:11.600 "uuid": "d8a06694-70d7-5201-b06e-3ead41842ff0", 00:17:11.600 "is_configured": true, 00:17:11.600 "data_offset": 0, 00:17:11.600 "data_size": 65536 00:17:11.600 }, 00:17:11.600 { 00:17:11.600 "name": "BaseBdev3", 00:17:11.600 "uuid": "64bdc1e3-382e-5091-b4b0-929624105026", 00:17:11.600 "is_configured": true, 00:17:11.600 "data_offset": 0, 00:17:11.600 "data_size": 65536 00:17:11.600 } 00:17:11.600 ] 00:17:11.600 }' 00:17:11.600 21:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.600 21:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:11.600 21:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.600 21:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:11.600 21:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:17:12.538 21:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:12.538 21:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:12.538 21:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.538 21:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:12.538 21:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:12.538 21:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.538 21:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.538 21:44:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.538 21:44:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.538 21:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.538 21:44:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.538 21:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.538 "name": "raid_bdev1", 00:17:12.538 "uuid": "717bba7c-174a-4da5-bca2-6039de9d9193", 00:17:12.538 "strip_size_kb": 64, 00:17:12.538 "state": "online", 00:17:12.538 "raid_level": "raid5f", 00:17:12.538 "superblock": false, 00:17:12.538 "num_base_bdevs": 3, 00:17:12.538 "num_base_bdevs_discovered": 3, 00:17:12.538 "num_base_bdevs_operational": 3, 00:17:12.538 "process": { 00:17:12.538 "type": "rebuild", 00:17:12.538 "target": "spare", 00:17:12.538 "progress": { 00:17:12.538 "blocks": 69632, 00:17:12.538 "percent": 53 00:17:12.538 } 00:17:12.538 }, 00:17:12.538 "base_bdevs_list": [ 00:17:12.538 { 00:17:12.538 "name": "spare", 00:17:12.538 "uuid": 
"7a0fb4da-5ab1-5c6c-909e-3db660bceecf", 00:17:12.538 "is_configured": true, 00:17:12.538 "data_offset": 0, 00:17:12.538 "data_size": 65536 00:17:12.538 }, 00:17:12.538 { 00:17:12.538 "name": "BaseBdev2", 00:17:12.538 "uuid": "d8a06694-70d7-5201-b06e-3ead41842ff0", 00:17:12.538 "is_configured": true, 00:17:12.538 "data_offset": 0, 00:17:12.538 "data_size": 65536 00:17:12.538 }, 00:17:12.538 { 00:17:12.538 "name": "BaseBdev3", 00:17:12.538 "uuid": "64bdc1e3-382e-5091-b4b0-929624105026", 00:17:12.538 "is_configured": true, 00:17:12.538 "data_offset": 0, 00:17:12.538 "data_size": 65536 00:17:12.538 } 00:17:12.538 ] 00:17:12.538 }' 00:17:12.538 21:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.538 21:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:12.538 21:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.538 21:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:12.538 21:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:13.475 21:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:13.475 21:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:13.475 21:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.475 21:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:13.475 21:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:13.475 21:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.475 21:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.475 21:44:14 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.475 21:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.475 21:44:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.734 21:44:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.734 21:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.734 "name": "raid_bdev1", 00:17:13.734 "uuid": "717bba7c-174a-4da5-bca2-6039de9d9193", 00:17:13.734 "strip_size_kb": 64, 00:17:13.734 "state": "online", 00:17:13.734 "raid_level": "raid5f", 00:17:13.734 "superblock": false, 00:17:13.734 "num_base_bdevs": 3, 00:17:13.734 "num_base_bdevs_discovered": 3, 00:17:13.734 "num_base_bdevs_operational": 3, 00:17:13.734 "process": { 00:17:13.734 "type": "rebuild", 00:17:13.734 "target": "spare", 00:17:13.734 "progress": { 00:17:13.734 "blocks": 92160, 00:17:13.734 "percent": 70 00:17:13.734 } 00:17:13.734 }, 00:17:13.734 "base_bdevs_list": [ 00:17:13.734 { 00:17:13.734 "name": "spare", 00:17:13.734 "uuid": "7a0fb4da-5ab1-5c6c-909e-3db660bceecf", 00:17:13.734 "is_configured": true, 00:17:13.734 "data_offset": 0, 00:17:13.734 "data_size": 65536 00:17:13.734 }, 00:17:13.734 { 00:17:13.734 "name": "BaseBdev2", 00:17:13.734 "uuid": "d8a06694-70d7-5201-b06e-3ead41842ff0", 00:17:13.734 "is_configured": true, 00:17:13.734 "data_offset": 0, 00:17:13.734 "data_size": 65536 00:17:13.734 }, 00:17:13.734 { 00:17:13.734 "name": "BaseBdev3", 00:17:13.734 "uuid": "64bdc1e3-382e-5091-b4b0-929624105026", 00:17:13.734 "is_configured": true, 00:17:13.734 "data_offset": 0, 00:17:13.734 "data_size": 65536 00:17:13.734 } 00:17:13.734 ] 00:17:13.734 }' 00:17:13.734 21:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.734 21:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:13.734 21:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.734 21:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:13.734 21:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:14.670 21:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:14.670 21:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:14.670 21:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.670 21:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:14.670 21:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:14.670 21:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.670 21:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.670 21:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.670 21:44:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.670 21:44:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.670 21:44:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.929 21:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.929 "name": "raid_bdev1", 00:17:14.929 "uuid": "717bba7c-174a-4da5-bca2-6039de9d9193", 00:17:14.929 "strip_size_kb": 64, 00:17:14.929 "state": "online", 00:17:14.929 "raid_level": "raid5f", 00:17:14.929 "superblock": false, 00:17:14.929 "num_base_bdevs": 3, 00:17:14.929 "num_base_bdevs_discovered": 3, 00:17:14.929 
"num_base_bdevs_operational": 3, 00:17:14.929 "process": { 00:17:14.929 "type": "rebuild", 00:17:14.929 "target": "spare", 00:17:14.929 "progress": { 00:17:14.929 "blocks": 116736, 00:17:14.929 "percent": 89 00:17:14.929 } 00:17:14.929 }, 00:17:14.929 "base_bdevs_list": [ 00:17:14.929 { 00:17:14.929 "name": "spare", 00:17:14.929 "uuid": "7a0fb4da-5ab1-5c6c-909e-3db660bceecf", 00:17:14.929 "is_configured": true, 00:17:14.929 "data_offset": 0, 00:17:14.929 "data_size": 65536 00:17:14.929 }, 00:17:14.929 { 00:17:14.929 "name": "BaseBdev2", 00:17:14.929 "uuid": "d8a06694-70d7-5201-b06e-3ead41842ff0", 00:17:14.929 "is_configured": true, 00:17:14.929 "data_offset": 0, 00:17:14.929 "data_size": 65536 00:17:14.929 }, 00:17:14.929 { 00:17:14.929 "name": "BaseBdev3", 00:17:14.929 "uuid": "64bdc1e3-382e-5091-b4b0-929624105026", 00:17:14.929 "is_configured": true, 00:17:14.929 "data_offset": 0, 00:17:14.929 "data_size": 65536 00:17:14.929 } 00:17:14.929 ] 00:17:14.929 }' 00:17:14.929 21:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.929 21:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:14.929 21:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.929 21:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:14.929 21:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:15.498 [2024-12-10 21:44:16.118430] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:15.498 [2024-12-10 21:44:16.118616] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:15.498 [2024-12-10 21:44:16.118706] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.066 21:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:17:16.066 21:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:16.066 21:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.066 21:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:16.066 21:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:16.066 21:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.066 21:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.066 21:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.066 21:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.067 21:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.067 21:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.067 21:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.067 "name": "raid_bdev1", 00:17:16.067 "uuid": "717bba7c-174a-4da5-bca2-6039de9d9193", 00:17:16.067 "strip_size_kb": 64, 00:17:16.067 "state": "online", 00:17:16.067 "raid_level": "raid5f", 00:17:16.067 "superblock": false, 00:17:16.067 "num_base_bdevs": 3, 00:17:16.067 "num_base_bdevs_discovered": 3, 00:17:16.067 "num_base_bdevs_operational": 3, 00:17:16.067 "base_bdevs_list": [ 00:17:16.067 { 00:17:16.067 "name": "spare", 00:17:16.067 "uuid": "7a0fb4da-5ab1-5c6c-909e-3db660bceecf", 00:17:16.067 "is_configured": true, 00:17:16.067 "data_offset": 0, 00:17:16.067 "data_size": 65536 00:17:16.067 }, 00:17:16.067 { 00:17:16.067 "name": "BaseBdev2", 00:17:16.067 "uuid": "d8a06694-70d7-5201-b06e-3ead41842ff0", 00:17:16.067 "is_configured": true, 00:17:16.067 
"data_offset": 0, 00:17:16.067 "data_size": 65536 00:17:16.067 }, 00:17:16.067 { 00:17:16.067 "name": "BaseBdev3", 00:17:16.067 "uuid": "64bdc1e3-382e-5091-b4b0-929624105026", 00:17:16.067 "is_configured": true, 00:17:16.067 "data_offset": 0, 00:17:16.067 "data_size": 65536 00:17:16.067 } 00:17:16.067 ] 00:17:16.067 }' 00:17:16.067 21:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.067 21:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:16.067 21:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.067 21:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:16.067 21:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:16.067 21:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:16.067 21:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.067 21:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:16.067 21:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:16.067 21:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.067 21:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.067 21:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.067 21:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.067 21:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.067 21:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.067 21:44:16 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.067 "name": "raid_bdev1", 00:17:16.067 "uuid": "717bba7c-174a-4da5-bca2-6039de9d9193", 00:17:16.067 "strip_size_kb": 64, 00:17:16.067 "state": "online", 00:17:16.067 "raid_level": "raid5f", 00:17:16.067 "superblock": false, 00:17:16.067 "num_base_bdevs": 3, 00:17:16.067 "num_base_bdevs_discovered": 3, 00:17:16.067 "num_base_bdevs_operational": 3, 00:17:16.067 "base_bdevs_list": [ 00:17:16.067 { 00:17:16.067 "name": "spare", 00:17:16.067 "uuid": "7a0fb4da-5ab1-5c6c-909e-3db660bceecf", 00:17:16.067 "is_configured": true, 00:17:16.067 "data_offset": 0, 00:17:16.067 "data_size": 65536 00:17:16.067 }, 00:17:16.067 { 00:17:16.067 "name": "BaseBdev2", 00:17:16.067 "uuid": "d8a06694-70d7-5201-b06e-3ead41842ff0", 00:17:16.067 "is_configured": true, 00:17:16.067 "data_offset": 0, 00:17:16.067 "data_size": 65536 00:17:16.067 }, 00:17:16.067 { 00:17:16.067 "name": "BaseBdev3", 00:17:16.067 "uuid": "64bdc1e3-382e-5091-b4b0-929624105026", 00:17:16.067 "is_configured": true, 00:17:16.067 "data_offset": 0, 00:17:16.067 "data_size": 65536 00:17:16.067 } 00:17:16.067 ] 00:17:16.067 }' 00:17:16.067 21:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.067 21:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:16.067 21:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.067 21:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:16.326 21:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:16.327 21:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.327 21:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:16.327 21:44:16 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:16.327 21:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:16.327 21:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:16.327 21:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.327 21:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.327 21:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.327 21:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.327 21:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.327 21:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.327 21:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.327 21:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.327 21:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.327 21:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.327 "name": "raid_bdev1", 00:17:16.327 "uuid": "717bba7c-174a-4da5-bca2-6039de9d9193", 00:17:16.327 "strip_size_kb": 64, 00:17:16.327 "state": "online", 00:17:16.327 "raid_level": "raid5f", 00:17:16.327 "superblock": false, 00:17:16.327 "num_base_bdevs": 3, 00:17:16.327 "num_base_bdevs_discovered": 3, 00:17:16.327 "num_base_bdevs_operational": 3, 00:17:16.327 "base_bdevs_list": [ 00:17:16.327 { 00:17:16.327 "name": "spare", 00:17:16.327 "uuid": "7a0fb4da-5ab1-5c6c-909e-3db660bceecf", 00:17:16.327 "is_configured": true, 00:17:16.327 "data_offset": 0, 00:17:16.327 "data_size": 65536 00:17:16.327 }, 00:17:16.327 { 00:17:16.327 
"name": "BaseBdev2", 00:17:16.327 "uuid": "d8a06694-70d7-5201-b06e-3ead41842ff0", 00:17:16.327 "is_configured": true, 00:17:16.327 "data_offset": 0, 00:17:16.327 "data_size": 65536 00:17:16.327 }, 00:17:16.327 { 00:17:16.327 "name": "BaseBdev3", 00:17:16.327 "uuid": "64bdc1e3-382e-5091-b4b0-929624105026", 00:17:16.327 "is_configured": true, 00:17:16.327 "data_offset": 0, 00:17:16.327 "data_size": 65536 00:17:16.327 } 00:17:16.327 ] 00:17:16.327 }' 00:17:16.327 21:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.327 21:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.592 21:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:16.592 21:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.592 21:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.592 [2024-12-10 21:44:17.298765] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:16.592 [2024-12-10 21:44:17.298801] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:16.592 [2024-12-10 21:44:17.298900] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:16.592 [2024-12-10 21:44:17.298994] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:16.592 [2024-12-10 21:44:17.299011] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:16.592 21:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.592 21:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.592 21:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.592 21:44:17 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:16.592 21:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.592 21:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.592 21:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:16.592 21:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:16.592 21:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:16.592 21:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:16.592 21:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:16.592 21:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:16.592 21:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:16.592 21:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:16.592 21:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:16.592 21:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:16.592 21:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:16.592 21:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:16.592 21:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:16.860 /dev/nbd0 00:17:16.860 21:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:16.860 21:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:16.860 21:44:17 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:16.860 21:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:16.860 21:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:16.860 21:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:16.860 21:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:16.860 21:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:16.860 21:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:16.860 21:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:16.860 21:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:16.860 1+0 records in 00:17:16.860 1+0 records out 00:17:16.860 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035397 s, 11.6 MB/s 00:17:16.860 21:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:16.860 21:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:16.860 21:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:16.860 21:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:16.860 21:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:16.860 21:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:16.860 21:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:16.860 21:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:17.119 /dev/nbd1 00:17:17.119 21:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:17.119 21:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:17.119 21:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:17.119 21:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:17.119 21:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:17.119 21:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:17.119 21:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:17.119 21:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:17.119 21:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:17.119 21:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:17.119 21:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:17.119 1+0 records in 00:17:17.119 1+0 records out 00:17:17.119 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244261 s, 16.8 MB/s 00:17:17.119 21:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.119 21:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:17.119 21:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.119 21:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:17.119 21:44:17 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:17.119 21:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:17.119 21:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:17.119 21:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:17.378 21:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:17.378 21:44:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:17.378 21:44:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:17.378 21:44:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:17.378 21:44:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:17.378 21:44:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:17.378 21:44:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:17.637 21:44:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:17.637 21:44:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:17.637 21:44:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:17.637 21:44:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:17.637 21:44:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:17.637 21:44:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:17.637 21:44:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:17.637 21:44:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:17:17.637 21:44:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:17.637 21:44:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:17.897 21:44:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:17.897 21:44:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:17.897 21:44:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:17.897 21:44:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:17.897 21:44:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:17.897 21:44:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:17.897 21:44:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:17.897 21:44:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:17.897 21:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:17.897 21:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81753 00:17:17.897 21:44:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81753 ']' 00:17:17.897 21:44:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81753 00:17:17.897 21:44:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:17:17.897 21:44:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:17.897 21:44:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81753 00:17:17.897 21:44:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:17.897 21:44:18 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:17.897 killing process with pid 81753 00:17:17.897 21:44:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81753' 00:17:17.897 21:44:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81753 00:17:17.897 Received shutdown signal, test time was about 60.000000 seconds 00:17:17.897 00:17:17.897 Latency(us) 00:17:17.897 [2024-12-10T21:44:18.680Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:17.897 [2024-12-10T21:44:18.680Z] =================================================================================================================== 00:17:17.897 [2024-12-10T21:44:18.680Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:17.897 [2024-12-10 21:44:18.541717] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:17.897 21:44:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81753 00:17:18.467 [2024-12-10 21:44:18.990754] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:19.852 21:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:19.852 00:17:19.852 real 0m15.737s 00:17:19.852 user 0m19.232s 00:17:19.852 sys 0m2.055s 00:17:19.852 21:44:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:19.852 21:44:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.852 ************************************ 00:17:19.852 END TEST raid5f_rebuild_test 00:17:19.852 ************************************ 00:17:19.852 21:44:20 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:17:19.852 21:44:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:19.852 21:44:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:19.852 21:44:20 bdev_raid 
-- common/autotest_common.sh@10 -- # set +x 00:17:19.852 ************************************ 00:17:19.852 START TEST raid5f_rebuild_test_sb 00:17:19.852 ************************************ 00:17:19.852 21:44:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:17:19.852 21:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:19.852 21:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:17:19.852 21:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:19.852 21:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:19.852 21:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:19.852 21:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:19.852 21:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:19.852 21:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:19.853 21:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:19.853 21:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:19.853 21:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:19.853 21:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:19.853 21:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:19.853 21:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:19.853 21:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:19.853 21:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:17:19.853 21:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:19.853 21:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:19.853 21:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:19.853 21:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:19.853 21:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:19.853 21:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:19.853 21:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:19.853 21:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:19.853 21:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:19.853 21:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:19.853 21:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:19.853 21:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:19.853 21:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:19.853 21:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82208 00:17:19.853 21:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:19.853 21:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82208 00:17:19.853 21:44:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82208 ']' 00:17:19.853 21:44:20 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.853 21:44:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:19.853 21:44:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.853 21:44:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:19.853 21:44:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.853 [2024-12-10 21:44:20.533823] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:17:19.853 [2024-12-10 21:44:20.534062] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82208 ] 00:17:19.853 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:19.853 Zero copy mechanism will not be used.
00:17:20.113 [2024-12-10 21:44:20.714718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.113 [2024-12-10 21:44:20.851414] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.372 [2024-12-10 21:44:21.091165] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:20.372 [2024-12-10 21:44:21.091321] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:20.941 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:20.941 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:20.941 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:20.941 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:20.941 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.941 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.941 BaseBdev1_malloc 00:17:20.941 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.941 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:20.941 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.941 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.941 [2024-12-10 21:44:21.546780] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:20.941 [2024-12-10 21:44:21.546848] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:17:20.941 [2024-12-10 21:44:21.546874] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:20.941 [2024-12-10 21:44:21.546889] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.941 [2024-12-10 21:44:21.549339] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.941 [2024-12-10 21:44:21.549380] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:20.941 BaseBdev1 00:17:20.941 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.941 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:20.941 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:20.941 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.941 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.941 BaseBdev2_malloc 00:17:20.941 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.941 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:20.941 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.941 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.941 [2024-12-10 21:44:21.609513] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:20.941 [2024-12-10 21:44:21.609579] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.941 [2024-12-10 21:44:21.609600] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:20.941 
[2024-12-10 21:44:21.609617] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.941 [2024-12-10 21:44:21.612011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.941 [2024-12-10 21:44:21.612050] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:20.941 BaseBdev2 00:17:20.941 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.941 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:20.941 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:20.941 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.941 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.941 BaseBdev3_malloc 00:17:20.941 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.941 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:20.941 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.941 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.941 [2024-12-10 21:44:21.683972] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:20.941 [2024-12-10 21:44:21.684037] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.941 [2024-12-10 21:44:21.684061] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:20.941 [2024-12-10 21:44:21.684075] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.941 [2024-12-10 21:44:21.686488] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.941 [2024-12-10 21:44:21.686532] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:20.941 BaseBdev3 00:17:20.941 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.941 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:20.941 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.941 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.202 spare_malloc 00:17:21.202 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.202 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:21.202 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.202 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.202 spare_delay 00:17:21.202 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.202 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:21.202 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.202 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.202 [2024-12-10 21:44:21.757944] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:21.202 [2024-12-10 21:44:21.758002] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.202 [2024-12-10 21:44:21.758024] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:17:21.202 [2024-12-10 21:44:21.758037] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.202 [2024-12-10 21:44:21.760478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.202 [2024-12-10 21:44:21.760520] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:21.202 spare 00:17:21.202 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.202 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:17:21.202 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.202 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.202 [2024-12-10 21:44:21.770007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:21.202 [2024-12-10 21:44:21.772075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:21.202 [2024-12-10 21:44:21.772155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:21.202 [2024-12-10 21:44:21.772370] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:21.202 [2024-12-10 21:44:21.772395] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:21.202 [2024-12-10 21:44:21.772690] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:21.202 [2024-12-10 21:44:21.779460] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:21.202 [2024-12-10 21:44:21.779495] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:21.202 [2024-12-10 21:44:21.779726] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:21.202 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.202 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:21.202 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:21.202 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:21.202 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:21.202 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:21.202 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:21.202 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.202 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.202 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.202 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.202 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.202 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.202 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.202 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.202 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.202 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.202 "name": "raid_bdev1", 00:17:21.202 
"uuid": "5f3bd32d-8b42-4b5b-b630-d5144c5e20cd", 00:17:21.202 "strip_size_kb": 64, 00:17:21.202 "state": "online", 00:17:21.202 "raid_level": "raid5f", 00:17:21.202 "superblock": true, 00:17:21.202 "num_base_bdevs": 3, 00:17:21.202 "num_base_bdevs_discovered": 3, 00:17:21.202 "num_base_bdevs_operational": 3, 00:17:21.202 "base_bdevs_list": [ 00:17:21.202 { 00:17:21.202 "name": "BaseBdev1", 00:17:21.202 "uuid": "01ba2c2b-a30c-5b86-9da1-889b78bb6f65", 00:17:21.202 "is_configured": true, 00:17:21.202 "data_offset": 2048, 00:17:21.202 "data_size": 63488 00:17:21.202 }, 00:17:21.202 { 00:17:21.202 "name": "BaseBdev2", 00:17:21.202 "uuid": "60932cd2-b475-5e57-bd83-c7caa0ea26ab", 00:17:21.202 "is_configured": true, 00:17:21.202 "data_offset": 2048, 00:17:21.202 "data_size": 63488 00:17:21.202 }, 00:17:21.202 { 00:17:21.202 "name": "BaseBdev3", 00:17:21.202 "uuid": "48de6bb9-4b1c-518d-a551-3b8a3ba1e969", 00:17:21.202 "is_configured": true, 00:17:21.202 "data_offset": 2048, 00:17:21.203 "data_size": 63488 00:17:21.203 } 00:17:21.203 ] 00:17:21.203 }' 00:17:21.203 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.203 21:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.462 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:21.462 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.462 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.462 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:21.462 [2024-12-10 21:44:22.174761] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:21.462 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.462 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 
-- # raid_bdev_size=126976 00:17:21.462 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.462 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:21.462 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.462 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.462 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.722 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:21.722 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:21.722 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:21.722 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:21.722 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:21.722 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:21.722 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:21.722 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:21.722 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:21.722 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:21.723 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:21.723 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:21.723 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:21.723 21:44:22 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:21.723 [2024-12-10 21:44:22.462128] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:21.723 /dev/nbd0 00:17:21.984 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:21.984 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:21.984 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:21.984 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:21.984 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:21.984 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:21.984 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:21.984 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:21.984 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:21.984 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:21.984 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:21.984 1+0 records in 00:17:21.984 1+0 records out 00:17:21.984 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404667 s, 10.1 MB/s 00:17:21.984 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:21.984 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:21.984 21:44:22 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:21.984 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:21.984 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:21.984 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:21.984 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:21.984 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:21.984 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:17:21.984 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:17:21.984 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:17:22.243 496+0 records in 00:17:22.243 496+0 records out 00:17:22.243 65011712 bytes (65 MB, 62 MiB) copied, 0.435704 s, 149 MB/s 00:17:22.243 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:22.243 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:22.243 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:22.243 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:22.243 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:22.243 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:22.243 21:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:22.503 21:44:23 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:22.503 [2024-12-10 21:44:23.201742] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:22.503 21:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:22.503 21:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:22.503 21:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:22.503 21:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:22.503 21:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:22.503 21:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:22.503 21:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:22.503 21:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:22.503 21:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.503 21:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.503 [2024-12-10 21:44:23.218856] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:22.503 21:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.503 21:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:22.503 21:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:22.503 21:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:22.503 21:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:22.503 21:44:23 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:22.503 21:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:22.503 21:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.503 21:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.503 21:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.503 21:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.503 21:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.503 21:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.503 21:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.503 21:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.503 21:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.503 21:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.503 "name": "raid_bdev1", 00:17:22.503 "uuid": "5f3bd32d-8b42-4b5b-b630-d5144c5e20cd", 00:17:22.503 "strip_size_kb": 64, 00:17:22.503 "state": "online", 00:17:22.503 "raid_level": "raid5f", 00:17:22.503 "superblock": true, 00:17:22.503 "num_base_bdevs": 3, 00:17:22.503 "num_base_bdevs_discovered": 2, 00:17:22.503 "num_base_bdevs_operational": 2, 00:17:22.503 "base_bdevs_list": [ 00:17:22.503 { 00:17:22.503 "name": null, 00:17:22.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.503 "is_configured": false, 00:17:22.503 "data_offset": 0, 00:17:22.503 "data_size": 63488 00:17:22.503 }, 00:17:22.503 { 00:17:22.503 "name": "BaseBdev2", 00:17:22.503 "uuid": "60932cd2-b475-5e57-bd83-c7caa0ea26ab", 00:17:22.503 
"is_configured": true, 00:17:22.503 "data_offset": 2048, 00:17:22.503 "data_size": 63488 00:17:22.503 }, 00:17:22.503 { 00:17:22.503 "name": "BaseBdev3", 00:17:22.503 "uuid": "48de6bb9-4b1c-518d-a551-3b8a3ba1e969", 00:17:22.503 "is_configured": true, 00:17:22.503 "data_offset": 2048, 00:17:22.503 "data_size": 63488 00:17:22.503 } 00:17:22.503 ] 00:17:22.503 }' 00:17:22.503 21:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.503 21:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.094 21:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:23.094 21:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.094 21:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.094 [2024-12-10 21:44:23.710080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:23.094 [2024-12-10 21:44:23.729106] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:17:23.094 21:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.094 21:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:23.094 [2024-12-10 21:44:23.737374] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:24.033 21:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:24.033 21:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:24.033 21:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:24.033 21:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:24.033 21:44:24 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:24.033 21:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.033 21:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.033 21:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.033 21:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.033 21:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.033 21:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:24.033 "name": "raid_bdev1", 00:17:24.033 "uuid": "5f3bd32d-8b42-4b5b-b630-d5144c5e20cd", 00:17:24.033 "strip_size_kb": 64, 00:17:24.033 "state": "online", 00:17:24.033 "raid_level": "raid5f", 00:17:24.033 "superblock": true, 00:17:24.033 "num_base_bdevs": 3, 00:17:24.033 "num_base_bdevs_discovered": 3, 00:17:24.033 "num_base_bdevs_operational": 3, 00:17:24.033 "process": { 00:17:24.033 "type": "rebuild", 00:17:24.033 "target": "spare", 00:17:24.033 "progress": { 00:17:24.033 "blocks": 20480, 00:17:24.033 "percent": 16 00:17:24.033 } 00:17:24.033 }, 00:17:24.033 "base_bdevs_list": [ 00:17:24.033 { 00:17:24.033 "name": "spare", 00:17:24.033 "uuid": "e0b1683f-8981-5c05-9bb5-5c165e2d53c5", 00:17:24.033 "is_configured": true, 00:17:24.033 "data_offset": 2048, 00:17:24.033 "data_size": 63488 00:17:24.033 }, 00:17:24.033 { 00:17:24.033 "name": "BaseBdev2", 00:17:24.033 "uuid": "60932cd2-b475-5e57-bd83-c7caa0ea26ab", 00:17:24.033 "is_configured": true, 00:17:24.033 "data_offset": 2048, 00:17:24.033 "data_size": 63488 00:17:24.033 }, 00:17:24.033 { 00:17:24.033 "name": "BaseBdev3", 00:17:24.033 "uuid": "48de6bb9-4b1c-518d-a551-3b8a3ba1e969", 00:17:24.033 "is_configured": true, 00:17:24.033 "data_offset": 2048, 00:17:24.033 "data_size": 
63488 00:17:24.033 } 00:17:24.033 ] 00:17:24.033 }' 00:17:24.033 21:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:24.294 21:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:24.294 21:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:24.294 21:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:24.294 21:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:24.294 21:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.294 21:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.294 [2024-12-10 21:44:24.892613] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:24.294 [2024-12-10 21:44:24.947706] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:24.294 [2024-12-10 21:44:24.947776] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.294 [2024-12-10 21:44:24.947794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:24.294 [2024-12-10 21:44:24.947801] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:24.294 21:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.294 21:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:24.294 21:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:24.294 21:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.294 21:44:24 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:24.294 21:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:24.294 21:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:24.294 21:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.294 21:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.294 21:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.294 21:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.294 21:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.294 21:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.294 21:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.294 21:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.294 21:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.294 21:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.294 "name": "raid_bdev1", 00:17:24.294 "uuid": "5f3bd32d-8b42-4b5b-b630-d5144c5e20cd", 00:17:24.294 "strip_size_kb": 64, 00:17:24.294 "state": "online", 00:17:24.294 "raid_level": "raid5f", 00:17:24.294 "superblock": true, 00:17:24.294 "num_base_bdevs": 3, 00:17:24.294 "num_base_bdevs_discovered": 2, 00:17:24.294 "num_base_bdevs_operational": 2, 00:17:24.294 "base_bdevs_list": [ 00:17:24.294 { 00:17:24.294 "name": null, 00:17:24.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.294 "is_configured": false, 00:17:24.294 "data_offset": 0, 00:17:24.294 "data_size": 63488 
00:17:24.294 }, 00:17:24.294 { 00:17:24.294 "name": "BaseBdev2", 00:17:24.294 "uuid": "60932cd2-b475-5e57-bd83-c7caa0ea26ab", 00:17:24.294 "is_configured": true, 00:17:24.294 "data_offset": 2048, 00:17:24.294 "data_size": 63488 00:17:24.294 }, 00:17:24.294 { 00:17:24.294 "name": "BaseBdev3", 00:17:24.294 "uuid": "48de6bb9-4b1c-518d-a551-3b8a3ba1e969", 00:17:24.294 "is_configured": true, 00:17:24.294 "data_offset": 2048, 00:17:24.294 "data_size": 63488 00:17:24.294 } 00:17:24.294 ] 00:17:24.294 }' 00:17:24.294 21:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.294 21:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.864 21:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:24.864 21:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:24.864 21:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:24.864 21:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:24.864 21:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:24.864 21:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.864 21:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.864 21:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.864 21:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.864 21:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.864 21:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:24.864 "name": "raid_bdev1", 00:17:24.864 "uuid": 
"5f3bd32d-8b42-4b5b-b630-d5144c5e20cd", 00:17:24.864 "strip_size_kb": 64, 00:17:24.864 "state": "online", 00:17:24.864 "raid_level": "raid5f", 00:17:24.864 "superblock": true, 00:17:24.864 "num_base_bdevs": 3, 00:17:24.864 "num_base_bdevs_discovered": 2, 00:17:24.864 "num_base_bdevs_operational": 2, 00:17:24.864 "base_bdevs_list": [ 00:17:24.864 { 00:17:24.864 "name": null, 00:17:24.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.864 "is_configured": false, 00:17:24.864 "data_offset": 0, 00:17:24.864 "data_size": 63488 00:17:24.864 }, 00:17:24.864 { 00:17:24.864 "name": "BaseBdev2", 00:17:24.864 "uuid": "60932cd2-b475-5e57-bd83-c7caa0ea26ab", 00:17:24.864 "is_configured": true, 00:17:24.864 "data_offset": 2048, 00:17:24.864 "data_size": 63488 00:17:24.864 }, 00:17:24.864 { 00:17:24.864 "name": "BaseBdev3", 00:17:24.864 "uuid": "48de6bb9-4b1c-518d-a551-3b8a3ba1e969", 00:17:24.864 "is_configured": true, 00:17:24.864 "data_offset": 2048, 00:17:24.864 "data_size": 63488 00:17:24.864 } 00:17:24.864 ] 00:17:24.864 }' 00:17:24.864 21:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:24.864 21:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:24.864 21:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:24.864 21:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:24.864 21:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:24.864 21:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.864 21:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.864 [2024-12-10 21:44:25.515080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:24.864 [2024-12-10 21:44:25.531010] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:17:24.864 21:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.864 21:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:24.864 [2024-12-10 21:44:25.538799] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:25.802 21:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:25.802 21:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:25.802 21:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:25.802 21:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:25.802 21:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.802 21:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.802 21:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.802 21:44:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.802 21:44:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.802 21:44:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.062 21:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.062 "name": "raid_bdev1", 00:17:26.062 "uuid": "5f3bd32d-8b42-4b5b-b630-d5144c5e20cd", 00:17:26.062 "strip_size_kb": 64, 00:17:26.062 "state": "online", 00:17:26.062 "raid_level": "raid5f", 00:17:26.062 "superblock": true, 00:17:26.062 "num_base_bdevs": 3, 00:17:26.062 "num_base_bdevs_discovered": 3, 00:17:26.062 
"num_base_bdevs_operational": 3, 00:17:26.062 "process": { 00:17:26.062 "type": "rebuild", 00:17:26.062 "target": "spare", 00:17:26.062 "progress": { 00:17:26.062 "blocks": 20480, 00:17:26.062 "percent": 16 00:17:26.062 } 00:17:26.062 }, 00:17:26.062 "base_bdevs_list": [ 00:17:26.062 { 00:17:26.062 "name": "spare", 00:17:26.062 "uuid": "e0b1683f-8981-5c05-9bb5-5c165e2d53c5", 00:17:26.062 "is_configured": true, 00:17:26.062 "data_offset": 2048, 00:17:26.062 "data_size": 63488 00:17:26.063 }, 00:17:26.063 { 00:17:26.063 "name": "BaseBdev2", 00:17:26.063 "uuid": "60932cd2-b475-5e57-bd83-c7caa0ea26ab", 00:17:26.063 "is_configured": true, 00:17:26.063 "data_offset": 2048, 00:17:26.063 "data_size": 63488 00:17:26.063 }, 00:17:26.063 { 00:17:26.063 "name": "BaseBdev3", 00:17:26.063 "uuid": "48de6bb9-4b1c-518d-a551-3b8a3ba1e969", 00:17:26.063 "is_configured": true, 00:17:26.063 "data_offset": 2048, 00:17:26.063 "data_size": 63488 00:17:26.063 } 00:17:26.063 ] 00:17:26.063 }' 00:17:26.063 21:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.063 21:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:26.063 21:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.063 21:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:26.063 21:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:26.063 21:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:26.063 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:26.063 21:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:17:26.063 21:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:26.063 
21:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=578 00:17:26.063 21:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:26.063 21:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:26.063 21:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:26.063 21:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:26.063 21:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:26.063 21:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:26.063 21:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.063 21:44:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.063 21:44:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.063 21:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.063 21:44:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.063 21:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.063 "name": "raid_bdev1", 00:17:26.063 "uuid": "5f3bd32d-8b42-4b5b-b630-d5144c5e20cd", 00:17:26.063 "strip_size_kb": 64, 00:17:26.063 "state": "online", 00:17:26.063 "raid_level": "raid5f", 00:17:26.063 "superblock": true, 00:17:26.063 "num_base_bdevs": 3, 00:17:26.063 "num_base_bdevs_discovered": 3, 00:17:26.063 "num_base_bdevs_operational": 3, 00:17:26.063 "process": { 00:17:26.063 "type": "rebuild", 00:17:26.063 "target": "spare", 00:17:26.063 "progress": { 00:17:26.063 "blocks": 22528, 00:17:26.063 "percent": 17 00:17:26.063 } 00:17:26.063 }, 
00:17:26.063 "base_bdevs_list": [ 00:17:26.063 { 00:17:26.063 "name": "spare", 00:17:26.063 "uuid": "e0b1683f-8981-5c05-9bb5-5c165e2d53c5", 00:17:26.063 "is_configured": true, 00:17:26.063 "data_offset": 2048, 00:17:26.063 "data_size": 63488 00:17:26.063 }, 00:17:26.063 { 00:17:26.063 "name": "BaseBdev2", 00:17:26.063 "uuid": "60932cd2-b475-5e57-bd83-c7caa0ea26ab", 00:17:26.063 "is_configured": true, 00:17:26.063 "data_offset": 2048, 00:17:26.063 "data_size": 63488 00:17:26.063 }, 00:17:26.063 { 00:17:26.063 "name": "BaseBdev3", 00:17:26.063 "uuid": "48de6bb9-4b1c-518d-a551-3b8a3ba1e969", 00:17:26.063 "is_configured": true, 00:17:26.063 "data_offset": 2048, 00:17:26.063 "data_size": 63488 00:17:26.063 } 00:17:26.063 ] 00:17:26.063 }' 00:17:26.063 21:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.063 21:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:26.063 21:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.063 21:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:26.063 21:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:27.443 21:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:27.444 21:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:27.444 21:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.444 21:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:27.444 21:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:27.444 21:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.444 
21:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.444 21:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.444 21:44:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.444 21:44:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.444 21:44:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.444 21:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.444 "name": "raid_bdev1", 00:17:27.444 "uuid": "5f3bd32d-8b42-4b5b-b630-d5144c5e20cd", 00:17:27.444 "strip_size_kb": 64, 00:17:27.444 "state": "online", 00:17:27.444 "raid_level": "raid5f", 00:17:27.444 "superblock": true, 00:17:27.444 "num_base_bdevs": 3, 00:17:27.444 "num_base_bdevs_discovered": 3, 00:17:27.444 "num_base_bdevs_operational": 3, 00:17:27.444 "process": { 00:17:27.444 "type": "rebuild", 00:17:27.444 "target": "spare", 00:17:27.444 "progress": { 00:17:27.444 "blocks": 45056, 00:17:27.444 "percent": 35 00:17:27.444 } 00:17:27.444 }, 00:17:27.444 "base_bdevs_list": [ 00:17:27.444 { 00:17:27.444 "name": "spare", 00:17:27.444 "uuid": "e0b1683f-8981-5c05-9bb5-5c165e2d53c5", 00:17:27.444 "is_configured": true, 00:17:27.444 "data_offset": 2048, 00:17:27.444 "data_size": 63488 00:17:27.444 }, 00:17:27.444 { 00:17:27.444 "name": "BaseBdev2", 00:17:27.444 "uuid": "60932cd2-b475-5e57-bd83-c7caa0ea26ab", 00:17:27.444 "is_configured": true, 00:17:27.444 "data_offset": 2048, 00:17:27.444 "data_size": 63488 00:17:27.444 }, 00:17:27.444 { 00:17:27.444 "name": "BaseBdev3", 00:17:27.444 "uuid": "48de6bb9-4b1c-518d-a551-3b8a3ba1e969", 00:17:27.444 "is_configured": true, 00:17:27.444 "data_offset": 2048, 00:17:27.444 "data_size": 63488 00:17:27.444 } 00:17:27.444 ] 00:17:27.444 }' 00:17:27.444 21:44:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.444 21:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:27.444 21:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.444 21:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:27.444 21:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:28.385 21:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:28.385 21:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:28.385 21:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.385 21:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:28.385 21:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:28.385 21:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.385 21:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.385 21:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.385 21:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.385 21:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.385 21:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.385 21:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.385 "name": "raid_bdev1", 00:17:28.385 "uuid": "5f3bd32d-8b42-4b5b-b630-d5144c5e20cd", 00:17:28.385 
"strip_size_kb": 64, 00:17:28.385 "state": "online", 00:17:28.385 "raid_level": "raid5f", 00:17:28.385 "superblock": true, 00:17:28.385 "num_base_bdevs": 3, 00:17:28.385 "num_base_bdevs_discovered": 3, 00:17:28.385 "num_base_bdevs_operational": 3, 00:17:28.385 "process": { 00:17:28.385 "type": "rebuild", 00:17:28.385 "target": "spare", 00:17:28.385 "progress": { 00:17:28.385 "blocks": 67584, 00:17:28.385 "percent": 53 00:17:28.385 } 00:17:28.385 }, 00:17:28.385 "base_bdevs_list": [ 00:17:28.385 { 00:17:28.385 "name": "spare", 00:17:28.385 "uuid": "e0b1683f-8981-5c05-9bb5-5c165e2d53c5", 00:17:28.385 "is_configured": true, 00:17:28.385 "data_offset": 2048, 00:17:28.385 "data_size": 63488 00:17:28.385 }, 00:17:28.385 { 00:17:28.385 "name": "BaseBdev2", 00:17:28.385 "uuid": "60932cd2-b475-5e57-bd83-c7caa0ea26ab", 00:17:28.385 "is_configured": true, 00:17:28.385 "data_offset": 2048, 00:17:28.385 "data_size": 63488 00:17:28.385 }, 00:17:28.385 { 00:17:28.385 "name": "BaseBdev3", 00:17:28.385 "uuid": "48de6bb9-4b1c-518d-a551-3b8a3ba1e969", 00:17:28.385 "is_configured": true, 00:17:28.385 "data_offset": 2048, 00:17:28.385 "data_size": 63488 00:17:28.385 } 00:17:28.385 ] 00:17:28.385 }' 00:17:28.385 21:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.385 21:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:28.385 21:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.385 21:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:28.385 21:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:29.323 21:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:29.324 21:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:17:29.584 21:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:29.584 21:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:29.584 21:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:29.584 21:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:29.584 21:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.584 21:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.584 21:44:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.584 21:44:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.584 21:44:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.584 21:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:29.584 "name": "raid_bdev1", 00:17:29.584 "uuid": "5f3bd32d-8b42-4b5b-b630-d5144c5e20cd", 00:17:29.584 "strip_size_kb": 64, 00:17:29.584 "state": "online", 00:17:29.584 "raid_level": "raid5f", 00:17:29.584 "superblock": true, 00:17:29.584 "num_base_bdevs": 3, 00:17:29.584 "num_base_bdevs_discovered": 3, 00:17:29.584 "num_base_bdevs_operational": 3, 00:17:29.584 "process": { 00:17:29.584 "type": "rebuild", 00:17:29.584 "target": "spare", 00:17:29.584 "progress": { 00:17:29.584 "blocks": 92160, 00:17:29.584 "percent": 72 00:17:29.584 } 00:17:29.584 }, 00:17:29.584 "base_bdevs_list": [ 00:17:29.584 { 00:17:29.584 "name": "spare", 00:17:29.584 "uuid": "e0b1683f-8981-5c05-9bb5-5c165e2d53c5", 00:17:29.584 "is_configured": true, 00:17:29.584 "data_offset": 2048, 00:17:29.584 "data_size": 63488 00:17:29.584 }, 00:17:29.584 { 00:17:29.584 "name": "BaseBdev2", 00:17:29.584 "uuid": 
"60932cd2-b475-5e57-bd83-c7caa0ea26ab", 00:17:29.584 "is_configured": true, 00:17:29.584 "data_offset": 2048, 00:17:29.584 "data_size": 63488 00:17:29.584 }, 00:17:29.584 { 00:17:29.584 "name": "BaseBdev3", 00:17:29.584 "uuid": "48de6bb9-4b1c-518d-a551-3b8a3ba1e969", 00:17:29.584 "is_configured": true, 00:17:29.584 "data_offset": 2048, 00:17:29.584 "data_size": 63488 00:17:29.584 } 00:17:29.584 ] 00:17:29.584 }' 00:17:29.584 21:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:29.584 21:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:29.584 21:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:29.584 21:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:29.584 21:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:30.622 21:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:30.622 21:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:30.622 21:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.622 21:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:30.622 21:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:30.622 21:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.622 21:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.623 21:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.623 21:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:30.623 21:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.623 21:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.623 21:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.623 "name": "raid_bdev1", 00:17:30.623 "uuid": "5f3bd32d-8b42-4b5b-b630-d5144c5e20cd", 00:17:30.623 "strip_size_kb": 64, 00:17:30.623 "state": "online", 00:17:30.623 "raid_level": "raid5f", 00:17:30.623 "superblock": true, 00:17:30.623 "num_base_bdevs": 3, 00:17:30.623 "num_base_bdevs_discovered": 3, 00:17:30.623 "num_base_bdevs_operational": 3, 00:17:30.623 "process": { 00:17:30.623 "type": "rebuild", 00:17:30.623 "target": "spare", 00:17:30.623 "progress": { 00:17:30.623 "blocks": 114688, 00:17:30.623 "percent": 90 00:17:30.623 } 00:17:30.623 }, 00:17:30.623 "base_bdevs_list": [ 00:17:30.623 { 00:17:30.623 "name": "spare", 00:17:30.623 "uuid": "e0b1683f-8981-5c05-9bb5-5c165e2d53c5", 00:17:30.623 "is_configured": true, 00:17:30.623 "data_offset": 2048, 00:17:30.623 "data_size": 63488 00:17:30.623 }, 00:17:30.623 { 00:17:30.623 "name": "BaseBdev2", 00:17:30.623 "uuid": "60932cd2-b475-5e57-bd83-c7caa0ea26ab", 00:17:30.623 "is_configured": true, 00:17:30.623 "data_offset": 2048, 00:17:30.623 "data_size": 63488 00:17:30.623 }, 00:17:30.623 { 00:17:30.623 "name": "BaseBdev3", 00:17:30.623 "uuid": "48de6bb9-4b1c-518d-a551-3b8a3ba1e969", 00:17:30.623 "is_configured": true, 00:17:30.623 "data_offset": 2048, 00:17:30.623 "data_size": 63488 00:17:30.623 } 00:17:30.623 ] 00:17:30.623 }' 00:17:30.623 21:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.623 21:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:30.623 21:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.882 
21:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:30.882 21:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:31.142 [2024-12-10 21:44:31.791589] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:31.142 [2024-12-10 21:44:31.791701] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:31.142 [2024-12-10 21:44:31.791817] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:31.711 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:31.711 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:31.711 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:31.711 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:31.711 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:31.711 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:31.711 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.711 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.711 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.711 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.711 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.711 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:31.711 "name": "raid_bdev1", 00:17:31.711 "uuid": 
"5f3bd32d-8b42-4b5b-b630-d5144c5e20cd", 00:17:31.711 "strip_size_kb": 64, 00:17:31.711 "state": "online", 00:17:31.711 "raid_level": "raid5f", 00:17:31.711 "superblock": true, 00:17:31.711 "num_base_bdevs": 3, 00:17:31.711 "num_base_bdevs_discovered": 3, 00:17:31.711 "num_base_bdevs_operational": 3, 00:17:31.711 "base_bdevs_list": [ 00:17:31.711 { 00:17:31.711 "name": "spare", 00:17:31.711 "uuid": "e0b1683f-8981-5c05-9bb5-5c165e2d53c5", 00:17:31.711 "is_configured": true, 00:17:31.711 "data_offset": 2048, 00:17:31.711 "data_size": 63488 00:17:31.711 }, 00:17:31.711 { 00:17:31.711 "name": "BaseBdev2", 00:17:31.711 "uuid": "60932cd2-b475-5e57-bd83-c7caa0ea26ab", 00:17:31.711 "is_configured": true, 00:17:31.711 "data_offset": 2048, 00:17:31.711 "data_size": 63488 00:17:31.711 }, 00:17:31.711 { 00:17:31.711 "name": "BaseBdev3", 00:17:31.711 "uuid": "48de6bb9-4b1c-518d-a551-3b8a3ba1e969", 00:17:31.711 "is_configured": true, 00:17:31.711 "data_offset": 2048, 00:17:31.711 "data_size": 63488 00:17:31.711 } 00:17:31.711 ] 00:17:31.711 }' 00:17:31.711 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:31.971 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:31.971 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:31.971 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:31.971 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:31.971 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:31.971 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:31.971 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:31.971 21:44:32 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:31.971 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:31.971 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.971 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.971 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.971 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.971 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.971 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:31.971 "name": "raid_bdev1", 00:17:31.971 "uuid": "5f3bd32d-8b42-4b5b-b630-d5144c5e20cd", 00:17:31.971 "strip_size_kb": 64, 00:17:31.971 "state": "online", 00:17:31.971 "raid_level": "raid5f", 00:17:31.971 "superblock": true, 00:17:31.971 "num_base_bdevs": 3, 00:17:31.971 "num_base_bdevs_discovered": 3, 00:17:31.971 "num_base_bdevs_operational": 3, 00:17:31.971 "base_bdevs_list": [ 00:17:31.971 { 00:17:31.971 "name": "spare", 00:17:31.971 "uuid": "e0b1683f-8981-5c05-9bb5-5c165e2d53c5", 00:17:31.971 "is_configured": true, 00:17:31.971 "data_offset": 2048, 00:17:31.971 "data_size": 63488 00:17:31.971 }, 00:17:31.971 { 00:17:31.971 "name": "BaseBdev2", 00:17:31.971 "uuid": "60932cd2-b475-5e57-bd83-c7caa0ea26ab", 00:17:31.971 "is_configured": true, 00:17:31.971 "data_offset": 2048, 00:17:31.971 "data_size": 63488 00:17:31.971 }, 00:17:31.971 { 00:17:31.971 "name": "BaseBdev3", 00:17:31.971 "uuid": "48de6bb9-4b1c-518d-a551-3b8a3ba1e969", 00:17:31.971 "is_configured": true, 00:17:31.971 "data_offset": 2048, 00:17:31.971 "data_size": 63488 00:17:31.971 } 00:17:31.971 ] 00:17:31.971 }' 00:17:31.971 21:44:32 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:31.971 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:31.971 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:31.971 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:31.971 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:31.971 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:31.971 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:31.971 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:31.971 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:31.971 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:31.971 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.971 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.971 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.971 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.971 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.971 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.971 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.971 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:17:31.971 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.231 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.231 "name": "raid_bdev1", 00:17:32.231 "uuid": "5f3bd32d-8b42-4b5b-b630-d5144c5e20cd", 00:17:32.231 "strip_size_kb": 64, 00:17:32.231 "state": "online", 00:17:32.231 "raid_level": "raid5f", 00:17:32.231 "superblock": true, 00:17:32.231 "num_base_bdevs": 3, 00:17:32.231 "num_base_bdevs_discovered": 3, 00:17:32.231 "num_base_bdevs_operational": 3, 00:17:32.231 "base_bdevs_list": [ 00:17:32.231 { 00:17:32.231 "name": "spare", 00:17:32.231 "uuid": "e0b1683f-8981-5c05-9bb5-5c165e2d53c5", 00:17:32.231 "is_configured": true, 00:17:32.231 "data_offset": 2048, 00:17:32.231 "data_size": 63488 00:17:32.231 }, 00:17:32.231 { 00:17:32.231 "name": "BaseBdev2", 00:17:32.231 "uuid": "60932cd2-b475-5e57-bd83-c7caa0ea26ab", 00:17:32.231 "is_configured": true, 00:17:32.231 "data_offset": 2048, 00:17:32.231 "data_size": 63488 00:17:32.231 }, 00:17:32.231 { 00:17:32.231 "name": "BaseBdev3", 00:17:32.231 "uuid": "48de6bb9-4b1c-518d-a551-3b8a3ba1e969", 00:17:32.231 "is_configured": true, 00:17:32.231 "data_offset": 2048, 00:17:32.231 "data_size": 63488 00:17:32.231 } 00:17:32.231 ] 00:17:32.231 }' 00:17:32.231 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.231 21:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.490 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:32.490 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.490 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.490 [2024-12-10 21:44:33.186502] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:32.490 [2024-12-10 
21:44:33.186541] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:32.490 [2024-12-10 21:44:33.186637] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:32.490 [2024-12-10 21:44:33.186737] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:32.490 [2024-12-10 21:44:33.186771] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:32.490 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.490 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:32.490 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.490 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.490 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.490 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.490 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:32.490 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:32.490 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:32.490 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:32.490 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:32.490 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:32.490 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:32.490 21:44:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:32.490 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:32.490 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:32.490 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:32.490 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:32.490 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:32.801 /dev/nbd0 00:17:32.801 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:32.801 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:32.801 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:32.801 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:32.801 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:32.801 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:32.801 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:32.801 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:32.801 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:32.801 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:32.801 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:32.801 1+0 records in 00:17:32.801 1+0 
records out 00:17:32.801 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259513 s, 15.8 MB/s 00:17:32.801 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:32.801 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:32.801 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:32.801 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:32.801 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:32.801 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:32.801 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:32.801 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:33.061 /dev/nbd1 00:17:33.061 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:33.061 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:33.061 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:33.061 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:33.061 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:33.061 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:33.061 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:33.061 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:33.061 21:44:33 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:33.061 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:33.061 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:33.061 1+0 records in 00:17:33.061 1+0 records out 00:17:33.061 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402757 s, 10.2 MB/s 00:17:33.061 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:33.061 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:33.061 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:33.061 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:33.061 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:33.061 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:33.061 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:33.061 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:33.321 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:33.321 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:33.321 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:33.321 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:33.321 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 
-- # local i 00:17:33.321 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:33.321 21:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:33.581 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:33.581 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:33.581 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:33.581 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:33.581 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:33.581 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:33.581 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:33.581 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:33.581 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:33.581 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:33.581 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:33.581 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:33.581 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:33.581 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:33.581 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:33.581 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd1 /proc/partitions 00:17:33.581 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:33.581 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:33.581 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:33.581 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:33.581 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.581 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.581 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.581 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:33.581 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.581 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.581 [2024-12-10 21:44:34.355551] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:33.581 [2024-12-10 21:44:34.355627] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.581 [2024-12-10 21:44:34.355652] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:33.581 [2024-12-10 21:44:34.355665] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.581 [2024-12-10 21:44:34.358215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.581 [2024-12-10 21:44:34.358256] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:33.581 [2024-12-10 21:44:34.358346] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:33.581 [2024-12-10 21:44:34.358395] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:33.581 [2024-12-10 21:44:34.358579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:33.581 [2024-12-10 21:44:34.358706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:33.581 spare 00:17:33.581 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.581 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:33.581 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.581 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.840 [2024-12-10 21:44:34.458625] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:33.840 [2024-12-10 21:44:34.458660] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:33.840 [2024-12-10 21:44:34.458953] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:17:33.840 [2024-12-10 21:44:34.464941] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:33.840 [2024-12-10 21:44:34.464967] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:33.840 [2024-12-10 21:44:34.465154] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:33.840 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.840 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:33.840 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:33.840 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:17:33.840 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:33.840 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:33.840 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:33.840 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.840 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.840 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.840 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.840 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.840 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.840 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.840 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.840 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.840 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.840 "name": "raid_bdev1", 00:17:33.840 "uuid": "5f3bd32d-8b42-4b5b-b630-d5144c5e20cd", 00:17:33.840 "strip_size_kb": 64, 00:17:33.840 "state": "online", 00:17:33.840 "raid_level": "raid5f", 00:17:33.840 "superblock": true, 00:17:33.840 "num_base_bdevs": 3, 00:17:33.840 "num_base_bdevs_discovered": 3, 00:17:33.840 "num_base_bdevs_operational": 3, 00:17:33.840 "base_bdevs_list": [ 00:17:33.840 { 00:17:33.840 "name": "spare", 00:17:33.840 "uuid": "e0b1683f-8981-5c05-9bb5-5c165e2d53c5", 00:17:33.840 "is_configured": true, 00:17:33.840 
"data_offset": 2048, 00:17:33.840 "data_size": 63488 00:17:33.840 }, 00:17:33.840 { 00:17:33.840 "name": "BaseBdev2", 00:17:33.840 "uuid": "60932cd2-b475-5e57-bd83-c7caa0ea26ab", 00:17:33.840 "is_configured": true, 00:17:33.840 "data_offset": 2048, 00:17:33.840 "data_size": 63488 00:17:33.840 }, 00:17:33.840 { 00:17:33.840 "name": "BaseBdev3", 00:17:33.840 "uuid": "48de6bb9-4b1c-518d-a551-3b8a3ba1e969", 00:17:33.840 "is_configured": true, 00:17:33.840 "data_offset": 2048, 00:17:33.840 "data_size": 63488 00:17:33.840 } 00:17:33.840 ] 00:17:33.840 }' 00:17:33.840 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.840 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.098 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:34.098 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:34.098 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:34.099 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:34.099 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:34.099 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.099 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.099 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.099 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.358 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.358 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:34.358 
"name": "raid_bdev1", 00:17:34.358 "uuid": "5f3bd32d-8b42-4b5b-b630-d5144c5e20cd", 00:17:34.358 "strip_size_kb": 64, 00:17:34.358 "state": "online", 00:17:34.358 "raid_level": "raid5f", 00:17:34.358 "superblock": true, 00:17:34.358 "num_base_bdevs": 3, 00:17:34.358 "num_base_bdevs_discovered": 3, 00:17:34.358 "num_base_bdevs_operational": 3, 00:17:34.358 "base_bdevs_list": [ 00:17:34.358 { 00:17:34.358 "name": "spare", 00:17:34.358 "uuid": "e0b1683f-8981-5c05-9bb5-5c165e2d53c5", 00:17:34.358 "is_configured": true, 00:17:34.358 "data_offset": 2048, 00:17:34.358 "data_size": 63488 00:17:34.358 }, 00:17:34.358 { 00:17:34.358 "name": "BaseBdev2", 00:17:34.358 "uuid": "60932cd2-b475-5e57-bd83-c7caa0ea26ab", 00:17:34.358 "is_configured": true, 00:17:34.358 "data_offset": 2048, 00:17:34.358 "data_size": 63488 00:17:34.358 }, 00:17:34.358 { 00:17:34.358 "name": "BaseBdev3", 00:17:34.358 "uuid": "48de6bb9-4b1c-518d-a551-3b8a3ba1e969", 00:17:34.358 "is_configured": true, 00:17:34.358 "data_offset": 2048, 00:17:34.358 "data_size": 63488 00:17:34.358 } 00:17:34.358 ] 00:17:34.358 }' 00:17:34.358 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:34.358 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:34.358 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:34.358 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:34.358 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.358 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.358 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:34.358 21:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.358 
21:44:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.358 21:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:34.358 21:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:34.358 21:44:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.358 21:44:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.358 [2024-12-10 21:44:35.046970] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:34.358 21:44:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.358 21:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:34.358 21:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.358 21:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:34.358 21:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:34.358 21:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:34.358 21:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:34.358 21:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.358 21:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.358 21:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.358 21:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.358 21:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:17:34.358 21:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.358 21:44:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.358 21:44:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.358 21:44:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.359 21:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.359 "name": "raid_bdev1", 00:17:34.359 "uuid": "5f3bd32d-8b42-4b5b-b630-d5144c5e20cd", 00:17:34.359 "strip_size_kb": 64, 00:17:34.359 "state": "online", 00:17:34.359 "raid_level": "raid5f", 00:17:34.359 "superblock": true, 00:17:34.359 "num_base_bdevs": 3, 00:17:34.359 "num_base_bdevs_discovered": 2, 00:17:34.359 "num_base_bdevs_operational": 2, 00:17:34.359 "base_bdevs_list": [ 00:17:34.359 { 00:17:34.359 "name": null, 00:17:34.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.359 "is_configured": false, 00:17:34.359 "data_offset": 0, 00:17:34.359 "data_size": 63488 00:17:34.359 }, 00:17:34.359 { 00:17:34.359 "name": "BaseBdev2", 00:17:34.359 "uuid": "60932cd2-b475-5e57-bd83-c7caa0ea26ab", 00:17:34.359 "is_configured": true, 00:17:34.359 "data_offset": 2048, 00:17:34.359 "data_size": 63488 00:17:34.359 }, 00:17:34.359 { 00:17:34.359 "name": "BaseBdev3", 00:17:34.359 "uuid": "48de6bb9-4b1c-518d-a551-3b8a3ba1e969", 00:17:34.359 "is_configured": true, 00:17:34.359 "data_offset": 2048, 00:17:34.359 "data_size": 63488 00:17:34.359 } 00:17:34.359 ] 00:17:34.359 }' 00:17:34.359 21:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.359 21:44:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.927 21:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:34.927 21:44:35 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.927 21:44:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.927 [2024-12-10 21:44:35.458313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:34.927 [2024-12-10 21:44:35.458544] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:34.927 [2024-12-10 21:44:35.458574] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:34.927 [2024-12-10 21:44:35.458616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:34.927 [2024-12-10 21:44:35.475832] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:17:34.927 21:44:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.927 21:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:34.927 [2024-12-10 21:44:35.483877] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:35.866 21:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:35.866 21:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:35.866 21:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:35.866 21:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:35.866 21:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:35.866 21:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.866 21:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:17:35.866 21:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.866 21:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.866 21:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.866 21:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:35.866 "name": "raid_bdev1", 00:17:35.866 "uuid": "5f3bd32d-8b42-4b5b-b630-d5144c5e20cd", 00:17:35.866 "strip_size_kb": 64, 00:17:35.866 "state": "online", 00:17:35.866 "raid_level": "raid5f", 00:17:35.866 "superblock": true, 00:17:35.866 "num_base_bdevs": 3, 00:17:35.866 "num_base_bdevs_discovered": 3, 00:17:35.866 "num_base_bdevs_operational": 3, 00:17:35.866 "process": { 00:17:35.866 "type": "rebuild", 00:17:35.866 "target": "spare", 00:17:35.866 "progress": { 00:17:35.866 "blocks": 20480, 00:17:35.866 "percent": 16 00:17:35.866 } 00:17:35.866 }, 00:17:35.866 "base_bdevs_list": [ 00:17:35.866 { 00:17:35.866 "name": "spare", 00:17:35.866 "uuid": "e0b1683f-8981-5c05-9bb5-5c165e2d53c5", 00:17:35.866 "is_configured": true, 00:17:35.866 "data_offset": 2048, 00:17:35.866 "data_size": 63488 00:17:35.866 }, 00:17:35.866 { 00:17:35.866 "name": "BaseBdev2", 00:17:35.866 "uuid": "60932cd2-b475-5e57-bd83-c7caa0ea26ab", 00:17:35.866 "is_configured": true, 00:17:35.866 "data_offset": 2048, 00:17:35.866 "data_size": 63488 00:17:35.866 }, 00:17:35.866 { 00:17:35.866 "name": "BaseBdev3", 00:17:35.866 "uuid": "48de6bb9-4b1c-518d-a551-3b8a3ba1e969", 00:17:35.866 "is_configured": true, 00:17:35.866 "data_offset": 2048, 00:17:35.866 "data_size": 63488 00:17:35.866 } 00:17:35.866 ] 00:17:35.866 }' 00:17:35.866 21:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:35.866 21:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:35.866 
21:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:35.866 21:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:35.866 21:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:35.866 21:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.866 21:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.866 [2024-12-10 21:44:36.622876] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:36.125 [2024-12-10 21:44:36.692934] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:36.125 [2024-12-10 21:44:36.693013] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.125 [2024-12-10 21:44:36.693031] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:36.125 [2024-12-10 21:44:36.693041] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:36.125 21:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.125 21:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:36.125 21:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:36.125 21:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:36.125 21:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:36.126 21:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:36.126 21:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:36.126 
21:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.126 21:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.126 21:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.126 21:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.126 21:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.126 21:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.126 21:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.126 21:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.126 21:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.126 21:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.126 "name": "raid_bdev1", 00:17:36.126 "uuid": "5f3bd32d-8b42-4b5b-b630-d5144c5e20cd", 00:17:36.126 "strip_size_kb": 64, 00:17:36.126 "state": "online", 00:17:36.126 "raid_level": "raid5f", 00:17:36.126 "superblock": true, 00:17:36.126 "num_base_bdevs": 3, 00:17:36.126 "num_base_bdevs_discovered": 2, 00:17:36.126 "num_base_bdevs_operational": 2, 00:17:36.126 "base_bdevs_list": [ 00:17:36.126 { 00:17:36.126 "name": null, 00:17:36.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.126 "is_configured": false, 00:17:36.126 "data_offset": 0, 00:17:36.126 "data_size": 63488 00:17:36.126 }, 00:17:36.126 { 00:17:36.126 "name": "BaseBdev2", 00:17:36.126 "uuid": "60932cd2-b475-5e57-bd83-c7caa0ea26ab", 00:17:36.126 "is_configured": true, 00:17:36.126 "data_offset": 2048, 00:17:36.126 "data_size": 63488 00:17:36.126 }, 00:17:36.126 { 00:17:36.126 "name": "BaseBdev3", 00:17:36.126 "uuid": 
"48de6bb9-4b1c-518d-a551-3b8a3ba1e969", 00:17:36.126 "is_configured": true, 00:17:36.126 "data_offset": 2048, 00:17:36.126 "data_size": 63488 00:17:36.126 } 00:17:36.126 ] 00:17:36.126 }' 00:17:36.126 21:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.126 21:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.385 21:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:36.385 21:44:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.385 21:44:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.385 [2024-12-10 21:44:37.151125] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:36.385 [2024-12-10 21:44:37.151195] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.385 [2024-12-10 21:44:37.151216] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:17:36.385 [2024-12-10 21:44:37.151230] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.385 [2024-12-10 21:44:37.151778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.385 [2024-12-10 21:44:37.151810] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:36.385 [2024-12-10 21:44:37.151922] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:36.385 [2024-12-10 21:44:37.151946] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:36.385 [2024-12-10 21:44:37.151958] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:36.385 [2024-12-10 21:44:37.151982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:36.645 [2024-12-10 21:44:37.167931] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:17:36.645 spare 00:17:36.645 21:44:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.645 21:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:36.645 [2024-12-10 21:44:37.175406] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:37.583 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:37.583 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.583 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:37.583 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:37.583 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.583 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.583 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.583 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.583 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.583 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.583 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.583 "name": "raid_bdev1", 00:17:37.583 "uuid": "5f3bd32d-8b42-4b5b-b630-d5144c5e20cd", 00:17:37.583 "strip_size_kb": 64, 00:17:37.583 "state": 
"online", 00:17:37.583 "raid_level": "raid5f", 00:17:37.583 "superblock": true, 00:17:37.583 "num_base_bdevs": 3, 00:17:37.583 "num_base_bdevs_discovered": 3, 00:17:37.583 "num_base_bdevs_operational": 3, 00:17:37.583 "process": { 00:17:37.583 "type": "rebuild", 00:17:37.583 "target": "spare", 00:17:37.583 "progress": { 00:17:37.583 "blocks": 20480, 00:17:37.583 "percent": 16 00:17:37.583 } 00:17:37.583 }, 00:17:37.583 "base_bdevs_list": [ 00:17:37.583 { 00:17:37.583 "name": "spare", 00:17:37.583 "uuid": "e0b1683f-8981-5c05-9bb5-5c165e2d53c5", 00:17:37.583 "is_configured": true, 00:17:37.583 "data_offset": 2048, 00:17:37.583 "data_size": 63488 00:17:37.583 }, 00:17:37.583 { 00:17:37.583 "name": "BaseBdev2", 00:17:37.583 "uuid": "60932cd2-b475-5e57-bd83-c7caa0ea26ab", 00:17:37.583 "is_configured": true, 00:17:37.583 "data_offset": 2048, 00:17:37.583 "data_size": 63488 00:17:37.583 }, 00:17:37.583 { 00:17:37.583 "name": "BaseBdev3", 00:17:37.583 "uuid": "48de6bb9-4b1c-518d-a551-3b8a3ba1e969", 00:17:37.583 "is_configured": true, 00:17:37.583 "data_offset": 2048, 00:17:37.583 "data_size": 63488 00:17:37.583 } 00:17:37.583 ] 00:17:37.583 }' 00:17:37.583 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:37.583 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:37.583 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:37.583 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:37.583 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:37.583 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.583 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.583 [2024-12-10 21:44:38.314871] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:37.843 [2024-12-10 21:44:38.384612] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:37.843 [2024-12-10 21:44:38.384673] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.843 [2024-12-10 21:44:38.384691] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:37.843 [2024-12-10 21:44:38.384698] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:37.843 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.843 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:37.843 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.843 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.843 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:37.843 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:37.843 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:37.843 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.843 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.843 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.843 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.843 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.843 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.843 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.843 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.843 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.843 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.843 "name": "raid_bdev1", 00:17:37.843 "uuid": "5f3bd32d-8b42-4b5b-b630-d5144c5e20cd", 00:17:37.843 "strip_size_kb": 64, 00:17:37.843 "state": "online", 00:17:37.843 "raid_level": "raid5f", 00:17:37.843 "superblock": true, 00:17:37.843 "num_base_bdevs": 3, 00:17:37.843 "num_base_bdevs_discovered": 2, 00:17:37.843 "num_base_bdevs_operational": 2, 00:17:37.843 "base_bdevs_list": [ 00:17:37.843 { 00:17:37.843 "name": null, 00:17:37.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.843 "is_configured": false, 00:17:37.843 "data_offset": 0, 00:17:37.843 "data_size": 63488 00:17:37.843 }, 00:17:37.843 { 00:17:37.843 "name": "BaseBdev2", 00:17:37.843 "uuid": "60932cd2-b475-5e57-bd83-c7caa0ea26ab", 00:17:37.843 "is_configured": true, 00:17:37.843 "data_offset": 2048, 00:17:37.843 "data_size": 63488 00:17:37.843 }, 00:17:37.843 { 00:17:37.843 "name": "BaseBdev3", 00:17:37.843 "uuid": "48de6bb9-4b1c-518d-a551-3b8a3ba1e969", 00:17:37.843 "is_configured": true, 00:17:37.843 "data_offset": 2048, 00:17:37.843 "data_size": 63488 00:17:37.843 } 00:17:37.843 ] 00:17:37.843 }' 00:17:37.843 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.843 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.102 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:38.102 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:17:38.102 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:38.102 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:38.102 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.102 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.102 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.102 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.102 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.362 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.362 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.362 "name": "raid_bdev1", 00:17:38.362 "uuid": "5f3bd32d-8b42-4b5b-b630-d5144c5e20cd", 00:17:38.362 "strip_size_kb": 64, 00:17:38.362 "state": "online", 00:17:38.362 "raid_level": "raid5f", 00:17:38.362 "superblock": true, 00:17:38.362 "num_base_bdevs": 3, 00:17:38.362 "num_base_bdevs_discovered": 2, 00:17:38.362 "num_base_bdevs_operational": 2, 00:17:38.362 "base_bdevs_list": [ 00:17:38.362 { 00:17:38.362 "name": null, 00:17:38.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.362 "is_configured": false, 00:17:38.362 "data_offset": 0, 00:17:38.362 "data_size": 63488 00:17:38.362 }, 00:17:38.362 { 00:17:38.362 "name": "BaseBdev2", 00:17:38.362 "uuid": "60932cd2-b475-5e57-bd83-c7caa0ea26ab", 00:17:38.362 "is_configured": true, 00:17:38.362 "data_offset": 2048, 00:17:38.362 "data_size": 63488 00:17:38.362 }, 00:17:38.362 { 00:17:38.362 "name": "BaseBdev3", 00:17:38.362 "uuid": "48de6bb9-4b1c-518d-a551-3b8a3ba1e969", 00:17:38.362 "is_configured": true, 
00:17:38.362 "data_offset": 2048, 00:17:38.362 "data_size": 63488 00:17:38.362 } 00:17:38.362 ] 00:17:38.362 }' 00:17:38.362 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.362 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:38.362 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.362 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:38.362 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:38.362 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.362 21:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.362 21:44:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.362 21:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:38.362 21:44:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.362 21:44:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.362 [2024-12-10 21:44:39.011001] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:38.362 [2024-12-10 21:44:39.011059] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.362 [2024-12-10 21:44:39.011092] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:38.362 [2024-12-10 21:44:39.011101] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.362 [2024-12-10 21:44:39.011590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.362 [2024-12-10 
21:44:39.011619] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:38.362 [2024-12-10 21:44:39.011713] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:38.362 [2024-12-10 21:44:39.011736] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:38.362 [2024-12-10 21:44:39.011758] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:38.362 [2024-12-10 21:44:39.011770] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:38.362 BaseBdev1 00:17:38.362 21:44:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.362 21:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:39.299 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:39.299 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:39.299 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:39.299 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:39.299 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:39.299 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:39.299 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.299 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.299 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.299 21:44:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.299 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.299 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.299 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.299 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.299 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.299 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.299 "name": "raid_bdev1", 00:17:39.299 "uuid": "5f3bd32d-8b42-4b5b-b630-d5144c5e20cd", 00:17:39.299 "strip_size_kb": 64, 00:17:39.299 "state": "online", 00:17:39.299 "raid_level": "raid5f", 00:17:39.299 "superblock": true, 00:17:39.299 "num_base_bdevs": 3, 00:17:39.299 "num_base_bdevs_discovered": 2, 00:17:39.299 "num_base_bdevs_operational": 2, 00:17:39.299 "base_bdevs_list": [ 00:17:39.299 { 00:17:39.299 "name": null, 00:17:39.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.299 "is_configured": false, 00:17:39.299 "data_offset": 0, 00:17:39.299 "data_size": 63488 00:17:39.299 }, 00:17:39.299 { 00:17:39.299 "name": "BaseBdev2", 00:17:39.299 "uuid": "60932cd2-b475-5e57-bd83-c7caa0ea26ab", 00:17:39.299 "is_configured": true, 00:17:39.299 "data_offset": 2048, 00:17:39.299 "data_size": 63488 00:17:39.299 }, 00:17:39.299 { 00:17:39.299 "name": "BaseBdev3", 00:17:39.299 "uuid": "48de6bb9-4b1c-518d-a551-3b8a3ba1e969", 00:17:39.299 "is_configured": true, 00:17:39.299 "data_offset": 2048, 00:17:39.299 "data_size": 63488 00:17:39.299 } 00:17:39.299 ] 00:17:39.299 }' 00:17:39.299 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.300 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:39.869 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:39.869 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.869 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:39.869 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:39.869 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.869 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.869 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.869 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.869 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.869 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.869 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.869 "name": "raid_bdev1", 00:17:39.869 "uuid": "5f3bd32d-8b42-4b5b-b630-d5144c5e20cd", 00:17:39.870 "strip_size_kb": 64, 00:17:39.870 "state": "online", 00:17:39.870 "raid_level": "raid5f", 00:17:39.870 "superblock": true, 00:17:39.870 "num_base_bdevs": 3, 00:17:39.870 "num_base_bdevs_discovered": 2, 00:17:39.870 "num_base_bdevs_operational": 2, 00:17:39.870 "base_bdevs_list": [ 00:17:39.870 { 00:17:39.870 "name": null, 00:17:39.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.870 "is_configured": false, 00:17:39.870 "data_offset": 0, 00:17:39.870 "data_size": 63488 00:17:39.870 }, 00:17:39.870 { 00:17:39.870 "name": "BaseBdev2", 00:17:39.870 "uuid": "60932cd2-b475-5e57-bd83-c7caa0ea26ab", 
00:17:39.870 "is_configured": true, 00:17:39.870 "data_offset": 2048, 00:17:39.870 "data_size": 63488 00:17:39.870 }, 00:17:39.870 { 00:17:39.870 "name": "BaseBdev3", 00:17:39.870 "uuid": "48de6bb9-4b1c-518d-a551-3b8a3ba1e969", 00:17:39.870 "is_configured": true, 00:17:39.870 "data_offset": 2048, 00:17:39.870 "data_size": 63488 00:17:39.870 } 00:17:39.870 ] 00:17:39.870 }' 00:17:39.870 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.870 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:39.870 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.870 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:39.870 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:39.870 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:17:39.870 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:39.870 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:39.870 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:39.870 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:39.870 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:39.870 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:39.870 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.870 21:44:40 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.870 [2024-12-10 21:44:40.620333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:39.870 [2024-12-10 21:44:40.620525] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:39.870 [2024-12-10 21:44:40.620550] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:39.870 request: 00:17:39.870 { 00:17:39.870 "base_bdev": "BaseBdev1", 00:17:39.870 "raid_bdev": "raid_bdev1", 00:17:39.870 "method": "bdev_raid_add_base_bdev", 00:17:39.870 "req_id": 1 00:17:39.870 } 00:17:39.870 Got JSON-RPC error response 00:17:39.870 response: 00:17:39.870 { 00:17:39.870 "code": -22, 00:17:39.870 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:39.870 } 00:17:39.870 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:39.870 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:17:39.870 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:39.870 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:39.870 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:39.870 21:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:41.251 21:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:41.251 21:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.251 21:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:41.251 21:44:41 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:41.251 21:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:41.251 21:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:41.251 21:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.251 21:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.251 21:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.251 21:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.251 21:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.251 21:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.251 21:44:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.251 21:44:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.251 21:44:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.251 21:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.251 "name": "raid_bdev1", 00:17:41.251 "uuid": "5f3bd32d-8b42-4b5b-b630-d5144c5e20cd", 00:17:41.251 "strip_size_kb": 64, 00:17:41.251 "state": "online", 00:17:41.251 "raid_level": "raid5f", 00:17:41.251 "superblock": true, 00:17:41.251 "num_base_bdevs": 3, 00:17:41.251 "num_base_bdevs_discovered": 2, 00:17:41.251 "num_base_bdevs_operational": 2, 00:17:41.251 "base_bdevs_list": [ 00:17:41.251 { 00:17:41.251 "name": null, 00:17:41.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.251 "is_configured": false, 00:17:41.251 "data_offset": 0, 00:17:41.251 "data_size": 63488 00:17:41.251 }, 00:17:41.251 { 00:17:41.251 
"name": "BaseBdev2", 00:17:41.251 "uuid": "60932cd2-b475-5e57-bd83-c7caa0ea26ab", 00:17:41.251 "is_configured": true, 00:17:41.251 "data_offset": 2048, 00:17:41.251 "data_size": 63488 00:17:41.251 }, 00:17:41.251 { 00:17:41.251 "name": "BaseBdev3", 00:17:41.251 "uuid": "48de6bb9-4b1c-518d-a551-3b8a3ba1e969", 00:17:41.251 "is_configured": true, 00:17:41.251 "data_offset": 2048, 00:17:41.251 "data_size": 63488 00:17:41.251 } 00:17:41.251 ] 00:17:41.251 }' 00:17:41.251 21:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.251 21:44:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.511 21:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:41.511 21:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.511 21:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:41.511 21:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:41.511 21:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.511 21:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.511 21:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.511 21:44:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.511 21:44:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.511 21:44:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.511 21:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.511 "name": "raid_bdev1", 00:17:41.511 "uuid": "5f3bd32d-8b42-4b5b-b630-d5144c5e20cd", 00:17:41.511 
"strip_size_kb": 64, 00:17:41.511 "state": "online", 00:17:41.511 "raid_level": "raid5f", 00:17:41.511 "superblock": true, 00:17:41.511 "num_base_bdevs": 3, 00:17:41.511 "num_base_bdevs_discovered": 2, 00:17:41.511 "num_base_bdevs_operational": 2, 00:17:41.511 "base_bdevs_list": [ 00:17:41.511 { 00:17:41.511 "name": null, 00:17:41.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.511 "is_configured": false, 00:17:41.511 "data_offset": 0, 00:17:41.511 "data_size": 63488 00:17:41.511 }, 00:17:41.511 { 00:17:41.511 "name": "BaseBdev2", 00:17:41.511 "uuid": "60932cd2-b475-5e57-bd83-c7caa0ea26ab", 00:17:41.511 "is_configured": true, 00:17:41.511 "data_offset": 2048, 00:17:41.511 "data_size": 63488 00:17:41.511 }, 00:17:41.511 { 00:17:41.511 "name": "BaseBdev3", 00:17:41.511 "uuid": "48de6bb9-4b1c-518d-a551-3b8a3ba1e969", 00:17:41.511 "is_configured": true, 00:17:41.511 "data_offset": 2048, 00:17:41.511 "data_size": 63488 00:17:41.511 } 00:17:41.511 ] 00:17:41.511 }' 00:17:41.511 21:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.511 21:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:41.511 21:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.511 21:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:41.511 21:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82208 00:17:41.511 21:44:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82208 ']' 00:17:41.511 21:44:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82208 00:17:41.511 21:44:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:41.511 21:44:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:41.511 21:44:42 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82208 00:17:41.511 killing process with pid 82208 00:17:41.511 Received shutdown signal, test time was about 60.000000 seconds 00:17:41.511 00:17:41.511 Latency(us) 00:17:41.511 [2024-12-10T21:44:42.294Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.511 [2024-12-10T21:44:42.294Z] =================================================================================================================== 00:17:41.511 [2024-12-10T21:44:42.294Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:41.511 21:44:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:41.511 21:44:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:41.511 21:44:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82208' 00:17:41.511 21:44:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82208 00:17:41.511 [2024-12-10 21:44:42.242046] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:41.511 [2024-12-10 21:44:42.242175] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:41.511 21:44:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82208 00:17:41.511 [2024-12-10 21:44:42.242247] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:41.511 [2024-12-10 21:44:42.242260] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:42.078 [2024-12-10 21:44:42.628039] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:43.015 21:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:43.015 00:17:43.015 real 0m23.265s 00:17:43.015 user 0m29.803s 
00:17:43.015 sys 0m2.743s 00:17:43.015 21:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:43.015 ************************************ 00:17:43.015 21:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.015 END TEST raid5f_rebuild_test_sb 00:17:43.015 ************************************ 00:17:43.015 21:44:43 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:17:43.015 21:44:43 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:17:43.015 21:44:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:43.015 21:44:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:43.015 21:44:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:43.015 ************************************ 00:17:43.015 START TEST raid5f_state_function_test 00:17:43.015 ************************************ 00:17:43.015 21:44:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:17:43.015 21:44:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:43.015 21:44:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:43.015 21:44:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:17:43.015 21:44:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:43.016 21:44:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:43.016 21:44:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:43.016 21:44:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:43.016 21:44:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:17:43.016 21:44:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:43.016 21:44:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:43.016 21:44:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:43.016 21:44:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:43.016 21:44:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:43.016 21:44:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:43.016 21:44:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:43.016 21:44:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:43.016 21:44:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:43.016 21:44:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:43.016 21:44:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:43.016 21:44:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:43.016 21:44:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:43.016 21:44:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:43.016 21:44:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:43.016 21:44:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:43.016 21:44:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:43.016 21:44:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:17:43.016 21:44:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:43.016 21:44:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:17:43.016 21:44:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:17:43.016 21:44:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82959 00:17:43.016 21:44:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:43.016 Process raid pid: 82959 00:17:43.016 21:44:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82959' 00:17:43.016 21:44:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82959 00:17:43.016 21:44:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 82959 ']' 00:17:43.016 21:44:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.016 21:44:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:43.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.016 21:44:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.016 21:44:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:43.016 21:44:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.275 [2024-12-10 21:44:43.850524] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:17:43.275 [2024-12-10 21:44:43.850633] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.275 [2024-12-10 21:44:44.007751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.534 [2024-12-10 21:44:44.124649] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.793 [2024-12-10 21:44:44.318968] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:43.793 [2024-12-10 21:44:44.319009] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:44.052 21:44:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:44.052 21:44:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:17:44.052 21:44:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:44.052 21:44:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.052 21:44:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.052 [2024-12-10 21:44:44.674653] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:44.052 [2024-12-10 21:44:44.674700] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:44.052 [2024-12-10 21:44:44.674722] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:44.052 [2024-12-10 21:44:44.674733] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:44.052 [2024-12-10 21:44:44.674739] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:17:44.052 [2024-12-10 21:44:44.674748] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:44.052 [2024-12-10 21:44:44.674754] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:44.052 [2024-12-10 21:44:44.674762] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:44.052 21:44:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.052 21:44:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:44.052 21:44:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:44.052 21:44:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:44.052 21:44:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:44.052 21:44:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:44.052 21:44:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:44.052 21:44:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.052 21:44:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.052 21:44:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.052 21:44:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.052 21:44:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.052 21:44:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.053 21:44:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.053 21:44:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.053 21:44:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.053 21:44:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.053 "name": "Existed_Raid", 00:17:44.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.053 "strip_size_kb": 64, 00:17:44.053 "state": "configuring", 00:17:44.053 "raid_level": "raid5f", 00:17:44.053 "superblock": false, 00:17:44.053 "num_base_bdevs": 4, 00:17:44.053 "num_base_bdevs_discovered": 0, 00:17:44.053 "num_base_bdevs_operational": 4, 00:17:44.053 "base_bdevs_list": [ 00:17:44.053 { 00:17:44.053 "name": "BaseBdev1", 00:17:44.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.053 "is_configured": false, 00:17:44.053 "data_offset": 0, 00:17:44.053 "data_size": 0 00:17:44.053 }, 00:17:44.053 { 00:17:44.053 "name": "BaseBdev2", 00:17:44.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.053 "is_configured": false, 00:17:44.053 "data_offset": 0, 00:17:44.053 "data_size": 0 00:17:44.053 }, 00:17:44.053 { 00:17:44.053 "name": "BaseBdev3", 00:17:44.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.053 "is_configured": false, 00:17:44.053 "data_offset": 0, 00:17:44.053 "data_size": 0 00:17:44.053 }, 00:17:44.053 { 00:17:44.053 "name": "BaseBdev4", 00:17:44.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.053 "is_configured": false, 00:17:44.053 "data_offset": 0, 00:17:44.053 "data_size": 0 00:17:44.053 } 00:17:44.053 ] 00:17:44.053 }' 00:17:44.053 21:44:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.053 21:44:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.620 21:44:45 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:44.620 21:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.620 21:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.620 [2024-12-10 21:44:45.105889] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:44.620 [2024-12-10 21:44:45.105931] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:44.620 21:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.620 21:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:44.620 21:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.620 21:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.620 [2024-12-10 21:44:45.117862] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:44.620 [2024-12-10 21:44:45.117898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:44.620 [2024-12-10 21:44:45.117922] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:44.620 [2024-12-10 21:44:45.117932] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:44.620 [2024-12-10 21:44:45.117938] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:44.620 [2024-12-10 21:44:45.117947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:44.620 [2024-12-10 21:44:45.117953] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:17:44.620 [2024-12-10 21:44:45.117962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:44.620 21:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.620 21:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:44.620 21:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.620 21:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.620 [2024-12-10 21:44:45.165107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:44.620 BaseBdev1 00:17:44.620 21:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.620 21:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:44.620 21:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:44.620 21:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:44.620 21:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:44.620 21:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:44.620 21:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:44.620 21:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:44.620 21:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.620 21:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.620 21:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.620 
21:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:44.620 21:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.620 21:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.620 [ 00:17:44.620 { 00:17:44.620 "name": "BaseBdev1", 00:17:44.620 "aliases": [ 00:17:44.620 "4ca5061f-c82f-4971-92b6-bbb14e000061" 00:17:44.620 ], 00:17:44.620 "product_name": "Malloc disk", 00:17:44.620 "block_size": 512, 00:17:44.620 "num_blocks": 65536, 00:17:44.620 "uuid": "4ca5061f-c82f-4971-92b6-bbb14e000061", 00:17:44.620 "assigned_rate_limits": { 00:17:44.620 "rw_ios_per_sec": 0, 00:17:44.620 "rw_mbytes_per_sec": 0, 00:17:44.620 "r_mbytes_per_sec": 0, 00:17:44.620 "w_mbytes_per_sec": 0 00:17:44.620 }, 00:17:44.620 "claimed": true, 00:17:44.620 "claim_type": "exclusive_write", 00:17:44.620 "zoned": false, 00:17:44.620 "supported_io_types": { 00:17:44.620 "read": true, 00:17:44.620 "write": true, 00:17:44.620 "unmap": true, 00:17:44.620 "flush": true, 00:17:44.620 "reset": true, 00:17:44.620 "nvme_admin": false, 00:17:44.620 "nvme_io": false, 00:17:44.620 "nvme_io_md": false, 00:17:44.620 "write_zeroes": true, 00:17:44.620 "zcopy": true, 00:17:44.620 "get_zone_info": false, 00:17:44.620 "zone_management": false, 00:17:44.620 "zone_append": false, 00:17:44.620 "compare": false, 00:17:44.620 "compare_and_write": false, 00:17:44.620 "abort": true, 00:17:44.620 "seek_hole": false, 00:17:44.620 "seek_data": false, 00:17:44.620 "copy": true, 00:17:44.620 "nvme_iov_md": false 00:17:44.620 }, 00:17:44.620 "memory_domains": [ 00:17:44.620 { 00:17:44.620 "dma_device_id": "system", 00:17:44.620 "dma_device_type": 1 00:17:44.620 }, 00:17:44.620 { 00:17:44.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.620 "dma_device_type": 2 00:17:44.620 } 00:17:44.620 ], 00:17:44.620 "driver_specific": {} 00:17:44.620 } 
00:17:44.620 ] 00:17:44.620 21:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.621 21:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:44.621 21:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:44.621 21:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:44.621 21:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:44.621 21:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:44.621 21:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:44.621 21:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:44.621 21:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.621 21:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.621 21:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.621 21:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.621 21:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.621 21:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.621 21:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.621 21:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.621 21:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:44.621 21:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.621 "name": "Existed_Raid", 00:17:44.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.621 "strip_size_kb": 64, 00:17:44.621 "state": "configuring", 00:17:44.621 "raid_level": "raid5f", 00:17:44.621 "superblock": false, 00:17:44.621 "num_base_bdevs": 4, 00:17:44.621 "num_base_bdevs_discovered": 1, 00:17:44.621 "num_base_bdevs_operational": 4, 00:17:44.621 "base_bdevs_list": [ 00:17:44.621 { 00:17:44.621 "name": "BaseBdev1", 00:17:44.621 "uuid": "4ca5061f-c82f-4971-92b6-bbb14e000061", 00:17:44.621 "is_configured": true, 00:17:44.621 "data_offset": 0, 00:17:44.621 "data_size": 65536 00:17:44.621 }, 00:17:44.621 { 00:17:44.621 "name": "BaseBdev2", 00:17:44.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.621 "is_configured": false, 00:17:44.621 "data_offset": 0, 00:17:44.621 "data_size": 0 00:17:44.621 }, 00:17:44.621 { 00:17:44.621 "name": "BaseBdev3", 00:17:44.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.621 "is_configured": false, 00:17:44.621 "data_offset": 0, 00:17:44.621 "data_size": 0 00:17:44.621 }, 00:17:44.621 { 00:17:44.621 "name": "BaseBdev4", 00:17:44.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.621 "is_configured": false, 00:17:44.621 "data_offset": 0, 00:17:44.621 "data_size": 0 00:17:44.621 } 00:17:44.621 ] 00:17:44.621 }' 00:17:44.621 21:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.621 21:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.189 21:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:45.189 21:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.189 21:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.189 
[2024-12-10 21:44:45.668315] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:45.189 [2024-12-10 21:44:45.668377] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:45.189 21:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.189 21:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:45.189 21:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.189 21:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.189 [2024-12-10 21:44:45.680334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:45.189 [2024-12-10 21:44:45.682154] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:45.189 [2024-12-10 21:44:45.682192] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:45.189 [2024-12-10 21:44:45.682218] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:45.189 [2024-12-10 21:44:45.682228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:45.189 [2024-12-10 21:44:45.682235] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:45.189 [2024-12-10 21:44:45.682243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:45.189 21:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.189 21:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:45.189 21:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:17:45.189 21:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:45.189 21:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:45.189 21:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:45.189 21:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:45.189 21:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:45.189 21:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:45.189 21:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.189 21:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.189 21:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.189 21:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.189 21:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.189 21:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.189 21:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.189 21:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.189 21:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.189 21:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.189 "name": "Existed_Raid", 00:17:45.189 "uuid": "00000000-0000-0000-0000-000000000000", 
00:17:45.189 "strip_size_kb": 64, 00:17:45.189 "state": "configuring", 00:17:45.189 "raid_level": "raid5f", 00:17:45.189 "superblock": false, 00:17:45.189 "num_base_bdevs": 4, 00:17:45.189 "num_base_bdevs_discovered": 1, 00:17:45.189 "num_base_bdevs_operational": 4, 00:17:45.189 "base_bdevs_list": [ 00:17:45.189 { 00:17:45.189 "name": "BaseBdev1", 00:17:45.189 "uuid": "4ca5061f-c82f-4971-92b6-bbb14e000061", 00:17:45.189 "is_configured": true, 00:17:45.189 "data_offset": 0, 00:17:45.189 "data_size": 65536 00:17:45.189 }, 00:17:45.189 { 00:17:45.189 "name": "BaseBdev2", 00:17:45.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.189 "is_configured": false, 00:17:45.189 "data_offset": 0, 00:17:45.189 "data_size": 0 00:17:45.189 }, 00:17:45.189 { 00:17:45.189 "name": "BaseBdev3", 00:17:45.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.189 "is_configured": false, 00:17:45.189 "data_offset": 0, 00:17:45.189 "data_size": 0 00:17:45.189 }, 00:17:45.189 { 00:17:45.190 "name": "BaseBdev4", 00:17:45.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.190 "is_configured": false, 00:17:45.190 "data_offset": 0, 00:17:45.190 "data_size": 0 00:17:45.190 } 00:17:45.190 ] 00:17:45.190 }' 00:17:45.190 21:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.190 21:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.449 21:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:45.449 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.449 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.449 [2024-12-10 21:44:46.152131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:45.449 BaseBdev2 00:17:45.449 21:44:46 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.449 21:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:45.449 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:45.449 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:45.449 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:45.449 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:45.449 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:45.449 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:45.449 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.449 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.449 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.449 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:45.449 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.449 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.449 [ 00:17:45.449 { 00:17:45.449 "name": "BaseBdev2", 00:17:45.449 "aliases": [ 00:17:45.449 "7e74a616-8b71-4ec8-aa7e-28db6d166c09" 00:17:45.449 ], 00:17:45.449 "product_name": "Malloc disk", 00:17:45.449 "block_size": 512, 00:17:45.449 "num_blocks": 65536, 00:17:45.449 "uuid": "7e74a616-8b71-4ec8-aa7e-28db6d166c09", 00:17:45.449 "assigned_rate_limits": { 00:17:45.449 "rw_ios_per_sec": 0, 00:17:45.449 "rw_mbytes_per_sec": 0, 00:17:45.449 
"r_mbytes_per_sec": 0, 00:17:45.449 "w_mbytes_per_sec": 0 00:17:45.449 }, 00:17:45.449 "claimed": true, 00:17:45.449 "claim_type": "exclusive_write", 00:17:45.449 "zoned": false, 00:17:45.449 "supported_io_types": { 00:17:45.449 "read": true, 00:17:45.449 "write": true, 00:17:45.449 "unmap": true, 00:17:45.449 "flush": true, 00:17:45.449 "reset": true, 00:17:45.449 "nvme_admin": false, 00:17:45.449 "nvme_io": false, 00:17:45.449 "nvme_io_md": false, 00:17:45.449 "write_zeroes": true, 00:17:45.449 "zcopy": true, 00:17:45.449 "get_zone_info": false, 00:17:45.449 "zone_management": false, 00:17:45.449 "zone_append": false, 00:17:45.449 "compare": false, 00:17:45.449 "compare_and_write": false, 00:17:45.449 "abort": true, 00:17:45.449 "seek_hole": false, 00:17:45.449 "seek_data": false, 00:17:45.449 "copy": true, 00:17:45.449 "nvme_iov_md": false 00:17:45.449 }, 00:17:45.449 "memory_domains": [ 00:17:45.449 { 00:17:45.449 "dma_device_id": "system", 00:17:45.449 "dma_device_type": 1 00:17:45.449 }, 00:17:45.449 { 00:17:45.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.449 "dma_device_type": 2 00:17:45.449 } 00:17:45.449 ], 00:17:45.449 "driver_specific": {} 00:17:45.449 } 00:17:45.449 ] 00:17:45.449 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.449 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:45.449 21:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:45.449 21:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:45.449 21:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:45.449 21:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:45.449 21:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:17:45.449 21:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:45.449 21:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:45.449 21:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:45.449 21:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.449 21:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.449 21:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.449 21:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.449 21:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.449 21:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.449 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.449 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.449 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.707 21:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.707 "name": "Existed_Raid", 00:17:45.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.707 "strip_size_kb": 64, 00:17:45.707 "state": "configuring", 00:17:45.707 "raid_level": "raid5f", 00:17:45.707 "superblock": false, 00:17:45.707 "num_base_bdevs": 4, 00:17:45.708 "num_base_bdevs_discovered": 2, 00:17:45.708 "num_base_bdevs_operational": 4, 00:17:45.708 "base_bdevs_list": [ 00:17:45.708 { 00:17:45.708 "name": "BaseBdev1", 00:17:45.708 "uuid": 
"4ca5061f-c82f-4971-92b6-bbb14e000061", 00:17:45.708 "is_configured": true, 00:17:45.708 "data_offset": 0, 00:17:45.708 "data_size": 65536 00:17:45.708 }, 00:17:45.708 { 00:17:45.708 "name": "BaseBdev2", 00:17:45.708 "uuid": "7e74a616-8b71-4ec8-aa7e-28db6d166c09", 00:17:45.708 "is_configured": true, 00:17:45.708 "data_offset": 0, 00:17:45.708 "data_size": 65536 00:17:45.708 }, 00:17:45.708 { 00:17:45.708 "name": "BaseBdev3", 00:17:45.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.708 "is_configured": false, 00:17:45.708 "data_offset": 0, 00:17:45.708 "data_size": 0 00:17:45.708 }, 00:17:45.708 { 00:17:45.708 "name": "BaseBdev4", 00:17:45.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.708 "is_configured": false, 00:17:45.708 "data_offset": 0, 00:17:45.708 "data_size": 0 00:17:45.708 } 00:17:45.708 ] 00:17:45.708 }' 00:17:45.708 21:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.708 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.966 21:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:45.966 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.966 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.966 [2024-12-10 21:44:46.713987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:45.966 BaseBdev3 00:17:45.966 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.966 21:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:45.966 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:45.966 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:17:45.966 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:45.966 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:45.966 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:45.966 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:45.966 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.966 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.966 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.966 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:45.966 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.966 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.966 [ 00:17:45.966 { 00:17:45.966 "name": "BaseBdev3", 00:17:45.966 "aliases": [ 00:17:45.966 "112f5129-3eb8-4438-a802-60eb9f1a54f1" 00:17:45.966 ], 00:17:45.966 "product_name": "Malloc disk", 00:17:45.966 "block_size": 512, 00:17:45.966 "num_blocks": 65536, 00:17:45.966 "uuid": "112f5129-3eb8-4438-a802-60eb9f1a54f1", 00:17:45.966 "assigned_rate_limits": { 00:17:45.966 "rw_ios_per_sec": 0, 00:17:45.966 "rw_mbytes_per_sec": 0, 00:17:45.966 "r_mbytes_per_sec": 0, 00:17:45.966 "w_mbytes_per_sec": 0 00:17:45.966 }, 00:17:45.966 "claimed": true, 00:17:45.966 "claim_type": "exclusive_write", 00:17:45.966 "zoned": false, 00:17:45.966 "supported_io_types": { 00:17:45.966 "read": true, 00:17:45.966 "write": true, 00:17:45.966 "unmap": true, 00:17:45.966 "flush": true, 00:17:45.966 "reset": true, 00:17:45.966 "nvme_admin": false, 
00:17:45.966 "nvme_io": false, 00:17:45.966 "nvme_io_md": false, 00:17:45.966 "write_zeroes": true, 00:17:45.966 "zcopy": true, 00:17:45.966 "get_zone_info": false, 00:17:45.966 "zone_management": false, 00:17:45.966 "zone_append": false, 00:17:45.966 "compare": false, 00:17:45.966 "compare_and_write": false, 00:17:45.966 "abort": true, 00:17:45.966 "seek_hole": false, 00:17:45.966 "seek_data": false, 00:17:45.966 "copy": true, 00:17:45.966 "nvme_iov_md": false 00:17:45.966 }, 00:17:45.966 "memory_domains": [ 00:17:45.966 { 00:17:45.966 "dma_device_id": "system", 00:17:45.966 "dma_device_type": 1 00:17:45.966 }, 00:17:46.225 { 00:17:46.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.225 "dma_device_type": 2 00:17:46.225 } 00:17:46.225 ], 00:17:46.225 "driver_specific": {} 00:17:46.225 } 00:17:46.225 ] 00:17:46.225 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.225 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:46.225 21:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:46.225 21:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:46.225 21:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:46.225 21:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:46.225 21:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:46.225 21:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:46.225 21:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:46.225 21:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:17:46.225 21:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.225 21:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.225 21:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.225 21:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.225 21:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.226 21:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.226 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.226 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.226 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.226 21:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.226 "name": "Existed_Raid", 00:17:46.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.226 "strip_size_kb": 64, 00:17:46.226 "state": "configuring", 00:17:46.226 "raid_level": "raid5f", 00:17:46.226 "superblock": false, 00:17:46.226 "num_base_bdevs": 4, 00:17:46.226 "num_base_bdevs_discovered": 3, 00:17:46.226 "num_base_bdevs_operational": 4, 00:17:46.226 "base_bdevs_list": [ 00:17:46.226 { 00:17:46.226 "name": "BaseBdev1", 00:17:46.226 "uuid": "4ca5061f-c82f-4971-92b6-bbb14e000061", 00:17:46.226 "is_configured": true, 00:17:46.226 "data_offset": 0, 00:17:46.226 "data_size": 65536 00:17:46.226 }, 00:17:46.226 { 00:17:46.226 "name": "BaseBdev2", 00:17:46.226 "uuid": "7e74a616-8b71-4ec8-aa7e-28db6d166c09", 00:17:46.226 "is_configured": true, 00:17:46.226 "data_offset": 0, 00:17:46.226 "data_size": 65536 00:17:46.226 }, 00:17:46.226 { 
00:17:46.226 "name": "BaseBdev3", 00:17:46.226 "uuid": "112f5129-3eb8-4438-a802-60eb9f1a54f1", 00:17:46.226 "is_configured": true, 00:17:46.226 "data_offset": 0, 00:17:46.226 "data_size": 65536 00:17:46.226 }, 00:17:46.226 { 00:17:46.226 "name": "BaseBdev4", 00:17:46.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.226 "is_configured": false, 00:17:46.226 "data_offset": 0, 00:17:46.226 "data_size": 0 00:17:46.226 } 00:17:46.226 ] 00:17:46.226 }' 00:17:46.226 21:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.226 21:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.485 21:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:46.485 21:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.485 21:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.745 [2024-12-10 21:44:47.267530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:46.745 [2024-12-10 21:44:47.267596] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:46.745 [2024-12-10 21:44:47.267606] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:46.745 [2024-12-10 21:44:47.267862] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:46.745 [2024-12-10 21:44:47.275388] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:46.745 [2024-12-10 21:44:47.275424] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:46.745 [2024-12-10 21:44:47.275704] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.745 BaseBdev4 00:17:46.745 21:44:47 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.745 21:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:46.745 21:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:46.745 21:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:46.745 21:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:46.745 21:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:46.745 21:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:46.745 21:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:46.745 21:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.745 21:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.745 21:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.745 21:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:46.745 21:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.745 21:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.745 [ 00:17:46.745 { 00:17:46.745 "name": "BaseBdev4", 00:17:46.745 "aliases": [ 00:17:46.745 "9a90bd95-432c-4bda-b8ed-0d125fa8ab06" 00:17:46.745 ], 00:17:46.745 "product_name": "Malloc disk", 00:17:46.745 "block_size": 512, 00:17:46.745 "num_blocks": 65536, 00:17:46.745 "uuid": "9a90bd95-432c-4bda-b8ed-0d125fa8ab06", 00:17:46.745 "assigned_rate_limits": { 00:17:46.745 "rw_ios_per_sec": 0, 00:17:46.745 
"rw_mbytes_per_sec": 0, 00:17:46.745 "r_mbytes_per_sec": 0, 00:17:46.745 "w_mbytes_per_sec": 0 00:17:46.745 }, 00:17:46.745 "claimed": true, 00:17:46.745 "claim_type": "exclusive_write", 00:17:46.745 "zoned": false, 00:17:46.745 "supported_io_types": { 00:17:46.745 "read": true, 00:17:46.745 "write": true, 00:17:46.745 "unmap": true, 00:17:46.745 "flush": true, 00:17:46.745 "reset": true, 00:17:46.745 "nvme_admin": false, 00:17:46.745 "nvme_io": false, 00:17:46.745 "nvme_io_md": false, 00:17:46.745 "write_zeroes": true, 00:17:46.745 "zcopy": true, 00:17:46.745 "get_zone_info": false, 00:17:46.745 "zone_management": false, 00:17:46.745 "zone_append": false, 00:17:46.745 "compare": false, 00:17:46.745 "compare_and_write": false, 00:17:46.745 "abort": true, 00:17:46.745 "seek_hole": false, 00:17:46.745 "seek_data": false, 00:17:46.745 "copy": true, 00:17:46.745 "nvme_iov_md": false 00:17:46.745 }, 00:17:46.745 "memory_domains": [ 00:17:46.745 { 00:17:46.745 "dma_device_id": "system", 00:17:46.745 "dma_device_type": 1 00:17:46.745 }, 00:17:46.745 { 00:17:46.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.745 "dma_device_type": 2 00:17:46.745 } 00:17:46.745 ], 00:17:46.745 "driver_specific": {} 00:17:46.745 } 00:17:46.745 ] 00:17:46.745 21:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.745 21:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:46.745 21:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:46.745 21:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:46.745 21:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:46.745 21:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:46.745 21:44:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:46.745 21:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:46.745 21:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:46.745 21:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:46.745 21:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.745 21:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.745 21:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.745 21:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.745 21:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.745 21:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.745 21:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.745 21:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.745 21:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.745 21:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.745 "name": "Existed_Raid", 00:17:46.745 "uuid": "70d1914f-318e-4b0a-b22a-f453ac6c5693", 00:17:46.745 "strip_size_kb": 64, 00:17:46.746 "state": "online", 00:17:46.746 "raid_level": "raid5f", 00:17:46.746 "superblock": false, 00:17:46.746 "num_base_bdevs": 4, 00:17:46.746 "num_base_bdevs_discovered": 4, 00:17:46.746 "num_base_bdevs_operational": 4, 00:17:46.746 "base_bdevs_list": [ 00:17:46.746 { 00:17:46.746 "name": 
"BaseBdev1", 00:17:46.746 "uuid": "4ca5061f-c82f-4971-92b6-bbb14e000061", 00:17:46.746 "is_configured": true, 00:17:46.746 "data_offset": 0, 00:17:46.746 "data_size": 65536 00:17:46.746 }, 00:17:46.746 { 00:17:46.746 "name": "BaseBdev2", 00:17:46.746 "uuid": "7e74a616-8b71-4ec8-aa7e-28db6d166c09", 00:17:46.746 "is_configured": true, 00:17:46.746 "data_offset": 0, 00:17:46.746 "data_size": 65536 00:17:46.746 }, 00:17:46.746 { 00:17:46.746 "name": "BaseBdev3", 00:17:46.746 "uuid": "112f5129-3eb8-4438-a802-60eb9f1a54f1", 00:17:46.746 "is_configured": true, 00:17:46.746 "data_offset": 0, 00:17:46.746 "data_size": 65536 00:17:46.746 }, 00:17:46.746 { 00:17:46.746 "name": "BaseBdev4", 00:17:46.746 "uuid": "9a90bd95-432c-4bda-b8ed-0d125fa8ab06", 00:17:46.746 "is_configured": true, 00:17:46.746 "data_offset": 0, 00:17:46.746 "data_size": 65536 00:17:46.746 } 00:17:46.746 ] 00:17:46.746 }' 00:17:46.746 21:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.746 21:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.005 21:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:47.005 21:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:47.005 21:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:47.005 21:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:47.005 21:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:47.005 21:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:47.005 21:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:47.005 21:44:47 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:47.005 21:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.005 21:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.263 [2024-12-10 21:44:47.787983] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:47.263 21:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.263 21:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:47.263 "name": "Existed_Raid", 00:17:47.263 "aliases": [ 00:17:47.263 "70d1914f-318e-4b0a-b22a-f453ac6c5693" 00:17:47.263 ], 00:17:47.263 "product_name": "Raid Volume", 00:17:47.263 "block_size": 512, 00:17:47.263 "num_blocks": 196608, 00:17:47.263 "uuid": "70d1914f-318e-4b0a-b22a-f453ac6c5693", 00:17:47.263 "assigned_rate_limits": { 00:17:47.263 "rw_ios_per_sec": 0, 00:17:47.263 "rw_mbytes_per_sec": 0, 00:17:47.263 "r_mbytes_per_sec": 0, 00:17:47.263 "w_mbytes_per_sec": 0 00:17:47.263 }, 00:17:47.263 "claimed": false, 00:17:47.263 "zoned": false, 00:17:47.263 "supported_io_types": { 00:17:47.263 "read": true, 00:17:47.263 "write": true, 00:17:47.263 "unmap": false, 00:17:47.263 "flush": false, 00:17:47.263 "reset": true, 00:17:47.263 "nvme_admin": false, 00:17:47.263 "nvme_io": false, 00:17:47.263 "nvme_io_md": false, 00:17:47.263 "write_zeroes": true, 00:17:47.263 "zcopy": false, 00:17:47.263 "get_zone_info": false, 00:17:47.264 "zone_management": false, 00:17:47.264 "zone_append": false, 00:17:47.264 "compare": false, 00:17:47.264 "compare_and_write": false, 00:17:47.264 "abort": false, 00:17:47.264 "seek_hole": false, 00:17:47.264 "seek_data": false, 00:17:47.264 "copy": false, 00:17:47.264 "nvme_iov_md": false 00:17:47.264 }, 00:17:47.264 "driver_specific": { 00:17:47.264 "raid": { 00:17:47.264 "uuid": "70d1914f-318e-4b0a-b22a-f453ac6c5693", 00:17:47.264 "strip_size_kb": 64, 
00:17:47.264 "state": "online", 00:17:47.264 "raid_level": "raid5f", 00:17:47.264 "superblock": false, 00:17:47.264 "num_base_bdevs": 4, 00:17:47.264 "num_base_bdevs_discovered": 4, 00:17:47.264 "num_base_bdevs_operational": 4, 00:17:47.264 "base_bdevs_list": [ 00:17:47.264 { 00:17:47.264 "name": "BaseBdev1", 00:17:47.264 "uuid": "4ca5061f-c82f-4971-92b6-bbb14e000061", 00:17:47.264 "is_configured": true, 00:17:47.264 "data_offset": 0, 00:17:47.264 "data_size": 65536 00:17:47.264 }, 00:17:47.264 { 00:17:47.264 "name": "BaseBdev2", 00:17:47.264 "uuid": "7e74a616-8b71-4ec8-aa7e-28db6d166c09", 00:17:47.264 "is_configured": true, 00:17:47.264 "data_offset": 0, 00:17:47.264 "data_size": 65536 00:17:47.264 }, 00:17:47.264 { 00:17:47.264 "name": "BaseBdev3", 00:17:47.264 "uuid": "112f5129-3eb8-4438-a802-60eb9f1a54f1", 00:17:47.264 "is_configured": true, 00:17:47.264 "data_offset": 0, 00:17:47.264 "data_size": 65536 00:17:47.264 }, 00:17:47.264 { 00:17:47.264 "name": "BaseBdev4", 00:17:47.264 "uuid": "9a90bd95-432c-4bda-b8ed-0d125fa8ab06", 00:17:47.264 "is_configured": true, 00:17:47.264 "data_offset": 0, 00:17:47.264 "data_size": 65536 00:17:47.264 } 00:17:47.264 ] 00:17:47.264 } 00:17:47.264 } 00:17:47.264 }' 00:17:47.264 21:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:47.264 21:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:47.264 BaseBdev2 00:17:47.264 BaseBdev3 00:17:47.264 BaseBdev4' 00:17:47.264 21:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:47.264 21:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:47.264 21:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:47.264 21:44:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:47.264 21:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.264 21:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.264 21:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:47.264 21:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.264 21:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:47.264 21:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:47.264 21:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:47.264 21:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:47.264 21:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:47.264 21:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.264 21:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.264 21:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.264 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:47.264 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:47.264 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:47.264 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:17:47.264 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:47.264 21:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.264 21:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.522 21:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.522 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:47.522 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:47.522 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:47.522 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:47.522 21:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.522 21:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.522 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:47.522 21:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.522 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:47.522 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:47.522 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:47.522 21:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.522 21:44:48 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:17:47.522 [2024-12-10 21:44:48.135205] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:47.522 21:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.522 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:47.522 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:47.522 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:47.522 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:47.522 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:47.523 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:47.523 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:47.523 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.523 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:47.523 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:47.523 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:47.523 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.523 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.523 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.523 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.523 21:44:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.523 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.523 21:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.523 21:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.523 21:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.523 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.523 "name": "Existed_Raid", 00:17:47.523 "uuid": "70d1914f-318e-4b0a-b22a-f453ac6c5693", 00:17:47.523 "strip_size_kb": 64, 00:17:47.523 "state": "online", 00:17:47.523 "raid_level": "raid5f", 00:17:47.523 "superblock": false, 00:17:47.523 "num_base_bdevs": 4, 00:17:47.523 "num_base_bdevs_discovered": 3, 00:17:47.523 "num_base_bdevs_operational": 3, 00:17:47.523 "base_bdevs_list": [ 00:17:47.523 { 00:17:47.523 "name": null, 00:17:47.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.523 "is_configured": false, 00:17:47.523 "data_offset": 0, 00:17:47.523 "data_size": 65536 00:17:47.523 }, 00:17:47.523 { 00:17:47.523 "name": "BaseBdev2", 00:17:47.523 "uuid": "7e74a616-8b71-4ec8-aa7e-28db6d166c09", 00:17:47.523 "is_configured": true, 00:17:47.523 "data_offset": 0, 00:17:47.523 "data_size": 65536 00:17:47.523 }, 00:17:47.523 { 00:17:47.523 "name": "BaseBdev3", 00:17:47.523 "uuid": "112f5129-3eb8-4438-a802-60eb9f1a54f1", 00:17:47.523 "is_configured": true, 00:17:47.523 "data_offset": 0, 00:17:47.523 "data_size": 65536 00:17:47.523 }, 00:17:47.523 { 00:17:47.523 "name": "BaseBdev4", 00:17:47.523 "uuid": "9a90bd95-432c-4bda-b8ed-0d125fa8ab06", 00:17:47.523 "is_configured": true, 00:17:47.523 "data_offset": 0, 00:17:47.523 "data_size": 65536 00:17:47.523 } 00:17:47.523 ] 00:17:47.523 }' 00:17:47.523 
21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.523 21:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.090 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:48.090 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:48.090 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:48.090 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.090 21:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.090 21:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.090 21:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.090 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:48.090 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:48.090 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:48.090 21:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.090 21:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.090 [2024-12-10 21:44:48.732073] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:48.090 [2024-12-10 21:44:48.732175] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:48.091 [2024-12-10 21:44:48.829038] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:48.091 21:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:17:48.091 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:48.091 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:48.091 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.091 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:48.091 21:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.091 21:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.091 21:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.350 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:48.350 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:48.350 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:48.350 21:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.350 21:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.350 [2024-12-10 21:44:48.888985] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:48.350 21:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.350 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:48.350 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:48.350 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.350 21:44:48 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.350 21:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:48.350 21:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.350 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.350 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:48.350 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:48.350 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:48.350 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.350 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.350 [2024-12-10 21:44:49.049720] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:48.350 [2024-12-10 21:44:49.049782] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.699 BaseBdev2 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.699 [ 00:17:48.699 { 00:17:48.699 "name": "BaseBdev2", 00:17:48.699 "aliases": [ 00:17:48.699 "5fee2b73-b798-4d76-b18d-6983240cfe7e" 00:17:48.699 ], 00:17:48.699 "product_name": "Malloc disk", 00:17:48.699 "block_size": 512, 00:17:48.699 "num_blocks": 65536, 00:17:48.699 "uuid": "5fee2b73-b798-4d76-b18d-6983240cfe7e", 00:17:48.699 "assigned_rate_limits": { 00:17:48.699 "rw_ios_per_sec": 0, 00:17:48.699 "rw_mbytes_per_sec": 0, 00:17:48.699 "r_mbytes_per_sec": 0, 00:17:48.699 "w_mbytes_per_sec": 0 00:17:48.699 }, 00:17:48.699 "claimed": false, 00:17:48.699 "zoned": false, 00:17:48.699 "supported_io_types": { 00:17:48.699 "read": true, 00:17:48.699 "write": true, 00:17:48.699 "unmap": true, 00:17:48.699 "flush": true, 00:17:48.699 "reset": true, 00:17:48.699 "nvme_admin": false, 00:17:48.699 "nvme_io": false, 00:17:48.699 "nvme_io_md": false, 00:17:48.699 "write_zeroes": true, 00:17:48.699 "zcopy": true, 00:17:48.699 "get_zone_info": false, 00:17:48.699 "zone_management": false, 00:17:48.699 "zone_append": false, 00:17:48.699 "compare": false, 00:17:48.699 "compare_and_write": false, 00:17:48.699 "abort": true, 00:17:48.699 "seek_hole": false, 00:17:48.699 "seek_data": false, 00:17:48.699 "copy": true, 00:17:48.699 "nvme_iov_md": false 00:17:48.699 }, 00:17:48.699 "memory_domains": [ 00:17:48.699 { 00:17:48.699 "dma_device_id": "system", 00:17:48.699 
"dma_device_type": 1 00:17:48.699 }, 00:17:48.699 { 00:17:48.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.699 "dma_device_type": 2 00:17:48.699 } 00:17:48.699 ], 00:17:48.699 "driver_specific": {} 00:17:48.699 } 00:17:48.699 ] 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.699 BaseBdev3 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:48.699 21:44:49 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.699 [ 00:17:48.699 { 00:17:48.699 "name": "BaseBdev3", 00:17:48.699 "aliases": [ 00:17:48.699 "6f9e5977-3914-4937-91dd-47c6fcdc3480" 00:17:48.699 ], 00:17:48.699 "product_name": "Malloc disk", 00:17:48.699 "block_size": 512, 00:17:48.699 "num_blocks": 65536, 00:17:48.699 "uuid": "6f9e5977-3914-4937-91dd-47c6fcdc3480", 00:17:48.699 "assigned_rate_limits": { 00:17:48.699 "rw_ios_per_sec": 0, 00:17:48.699 "rw_mbytes_per_sec": 0, 00:17:48.699 "r_mbytes_per_sec": 0, 00:17:48.699 "w_mbytes_per_sec": 0 00:17:48.699 }, 00:17:48.699 "claimed": false, 00:17:48.699 "zoned": false, 00:17:48.699 "supported_io_types": { 00:17:48.699 "read": true, 00:17:48.699 "write": true, 00:17:48.699 "unmap": true, 00:17:48.699 "flush": true, 00:17:48.699 "reset": true, 00:17:48.699 "nvme_admin": false, 00:17:48.699 "nvme_io": false, 00:17:48.699 "nvme_io_md": false, 00:17:48.699 "write_zeroes": true, 00:17:48.699 "zcopy": true, 00:17:48.699 "get_zone_info": false, 00:17:48.699 "zone_management": false, 00:17:48.699 "zone_append": false, 00:17:48.699 "compare": false, 00:17:48.699 "compare_and_write": false, 00:17:48.699 "abort": true, 00:17:48.699 "seek_hole": false, 00:17:48.699 "seek_data": false, 00:17:48.699 "copy": true, 00:17:48.699 "nvme_iov_md": false 00:17:48.699 }, 00:17:48.699 "memory_domains": [ 00:17:48.699 { 00:17:48.699 
"dma_device_id": "system", 00:17:48.699 "dma_device_type": 1 00:17:48.699 }, 00:17:48.699 { 00:17:48.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.699 "dma_device_type": 2 00:17:48.699 } 00:17:48.699 ], 00:17:48.699 "driver_specific": {} 00:17:48.699 } 00:17:48.699 ] 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.699 BaseBdev4 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.699 [ 00:17:48.699 { 00:17:48.699 "name": "BaseBdev4", 00:17:48.699 "aliases": [ 00:17:48.699 "075dcf75-1e65-470b-b6a0-54bf0a33de9e" 00:17:48.699 ], 00:17:48.699 "product_name": "Malloc disk", 00:17:48.699 "block_size": 512, 00:17:48.699 "num_blocks": 65536, 00:17:48.699 "uuid": "075dcf75-1e65-470b-b6a0-54bf0a33de9e", 00:17:48.699 "assigned_rate_limits": { 00:17:48.699 "rw_ios_per_sec": 0, 00:17:48.699 "rw_mbytes_per_sec": 0, 00:17:48.699 "r_mbytes_per_sec": 0, 00:17:48.699 "w_mbytes_per_sec": 0 00:17:48.699 }, 00:17:48.699 "claimed": false, 00:17:48.699 "zoned": false, 00:17:48.699 "supported_io_types": { 00:17:48.699 "read": true, 00:17:48.699 "write": true, 00:17:48.699 "unmap": true, 00:17:48.699 "flush": true, 00:17:48.699 "reset": true, 00:17:48.699 "nvme_admin": false, 00:17:48.699 "nvme_io": false, 00:17:48.699 "nvme_io_md": false, 00:17:48.699 "write_zeroes": true, 00:17:48.699 "zcopy": true, 00:17:48.699 "get_zone_info": false, 00:17:48.699 "zone_management": false, 00:17:48.699 "zone_append": false, 00:17:48.699 "compare": false, 00:17:48.699 "compare_and_write": false, 00:17:48.699 "abort": true, 00:17:48.699 "seek_hole": false, 00:17:48.699 "seek_data": false, 00:17:48.699 "copy": true, 00:17:48.699 "nvme_iov_md": false 00:17:48.699 }, 00:17:48.699 "memory_domains": [ 
00:17:48.699 { 00:17:48.699 "dma_device_id": "system", 00:17:48.699 "dma_device_type": 1 00:17:48.699 }, 00:17:48.699 { 00:17:48.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.699 "dma_device_type": 2 00:17:48.699 } 00:17:48.699 ], 00:17:48.699 "driver_specific": {} 00:17:48.699 } 00:17:48.699 ] 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.699 [2024-12-10 21:44:49.441570] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:48.699 [2024-12-10 21:44:49.441609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:48.699 [2024-12-10 21:44:49.441630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:48.699 [2024-12-10 21:44:49.443380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:48.699 [2024-12-10 21:44:49.443448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.699 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.972 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.972 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.972 "name": "Existed_Raid", 00:17:48.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.972 "strip_size_kb": 64, 00:17:48.972 "state": "configuring", 00:17:48.972 "raid_level": "raid5f", 00:17:48.972 
"superblock": false, 00:17:48.972 "num_base_bdevs": 4, 00:17:48.972 "num_base_bdevs_discovered": 3, 00:17:48.972 "num_base_bdevs_operational": 4, 00:17:48.972 "base_bdevs_list": [ 00:17:48.972 { 00:17:48.972 "name": "BaseBdev1", 00:17:48.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.972 "is_configured": false, 00:17:48.972 "data_offset": 0, 00:17:48.972 "data_size": 0 00:17:48.972 }, 00:17:48.972 { 00:17:48.972 "name": "BaseBdev2", 00:17:48.972 "uuid": "5fee2b73-b798-4d76-b18d-6983240cfe7e", 00:17:48.972 "is_configured": true, 00:17:48.972 "data_offset": 0, 00:17:48.972 "data_size": 65536 00:17:48.972 }, 00:17:48.972 { 00:17:48.972 "name": "BaseBdev3", 00:17:48.972 "uuid": "6f9e5977-3914-4937-91dd-47c6fcdc3480", 00:17:48.972 "is_configured": true, 00:17:48.972 "data_offset": 0, 00:17:48.972 "data_size": 65536 00:17:48.972 }, 00:17:48.972 { 00:17:48.972 "name": "BaseBdev4", 00:17:48.972 "uuid": "075dcf75-1e65-470b-b6a0-54bf0a33de9e", 00:17:48.972 "is_configured": true, 00:17:48.972 "data_offset": 0, 00:17:48.972 "data_size": 65536 00:17:48.972 } 00:17:48.972 ] 00:17:48.972 }' 00:17:48.972 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.972 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.231 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:49.231 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.231 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.231 [2024-12-10 21:44:49.860906] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:49.231 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.231 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:17:49.231 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:49.231 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:49.231 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:49.231 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:49.231 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:49.231 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.231 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.231 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.231 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.231 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.231 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.231 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.231 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.231 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.231 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.231 "name": "Existed_Raid", 00:17:49.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.231 "strip_size_kb": 64, 00:17:49.231 "state": "configuring", 00:17:49.231 "raid_level": "raid5f", 00:17:49.231 "superblock": false, 
00:17:49.231 "num_base_bdevs": 4, 00:17:49.231 "num_base_bdevs_discovered": 2, 00:17:49.231 "num_base_bdevs_operational": 4, 00:17:49.231 "base_bdevs_list": [ 00:17:49.231 { 00:17:49.231 "name": "BaseBdev1", 00:17:49.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.231 "is_configured": false, 00:17:49.231 "data_offset": 0, 00:17:49.231 "data_size": 0 00:17:49.231 }, 00:17:49.231 { 00:17:49.231 "name": null, 00:17:49.231 "uuid": "5fee2b73-b798-4d76-b18d-6983240cfe7e", 00:17:49.231 "is_configured": false, 00:17:49.231 "data_offset": 0, 00:17:49.231 "data_size": 65536 00:17:49.231 }, 00:17:49.231 { 00:17:49.231 "name": "BaseBdev3", 00:17:49.231 "uuid": "6f9e5977-3914-4937-91dd-47c6fcdc3480", 00:17:49.231 "is_configured": true, 00:17:49.231 "data_offset": 0, 00:17:49.231 "data_size": 65536 00:17:49.231 }, 00:17:49.231 { 00:17:49.231 "name": "BaseBdev4", 00:17:49.231 "uuid": "075dcf75-1e65-470b-b6a0-54bf0a33de9e", 00:17:49.231 "is_configured": true, 00:17:49.231 "data_offset": 0, 00:17:49.231 "data_size": 65536 00:17:49.231 } 00:17:49.231 ] 00:17:49.231 }' 00:17:49.231 21:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.231 21:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.800 21:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.800 21:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.800 21:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.800 21:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:49.800 21:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.800 21:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:49.800 
21:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:49.800 21:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.800 21:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.800 [2024-12-10 21:44:50.410479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:49.800 BaseBdev1 00:17:49.800 21:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.800 21:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:49.800 21:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:49.800 21:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:49.800 21:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:49.800 21:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:49.800 21:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:49.800 21:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:49.800 21:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.800 21:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.800 21:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.800 21:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:49.800 21:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.800 
21:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.800 [ 00:17:49.800 { 00:17:49.800 "name": "BaseBdev1", 00:17:49.800 "aliases": [ 00:17:49.800 "a717780a-eb68-49fc-9296-2b46bbd2e0ef" 00:17:49.800 ], 00:17:49.800 "product_name": "Malloc disk", 00:17:49.800 "block_size": 512, 00:17:49.800 "num_blocks": 65536, 00:17:49.800 "uuid": "a717780a-eb68-49fc-9296-2b46bbd2e0ef", 00:17:49.800 "assigned_rate_limits": { 00:17:49.800 "rw_ios_per_sec": 0, 00:17:49.800 "rw_mbytes_per_sec": 0, 00:17:49.800 "r_mbytes_per_sec": 0, 00:17:49.800 "w_mbytes_per_sec": 0 00:17:49.801 }, 00:17:49.801 "claimed": true, 00:17:49.801 "claim_type": "exclusive_write", 00:17:49.801 "zoned": false, 00:17:49.801 "supported_io_types": { 00:17:49.801 "read": true, 00:17:49.801 "write": true, 00:17:49.801 "unmap": true, 00:17:49.801 "flush": true, 00:17:49.801 "reset": true, 00:17:49.801 "nvme_admin": false, 00:17:49.801 "nvme_io": false, 00:17:49.801 "nvme_io_md": false, 00:17:49.801 "write_zeroes": true, 00:17:49.801 "zcopy": true, 00:17:49.801 "get_zone_info": false, 00:17:49.801 "zone_management": false, 00:17:49.801 "zone_append": false, 00:17:49.801 "compare": false, 00:17:49.801 "compare_and_write": false, 00:17:49.801 "abort": true, 00:17:49.801 "seek_hole": false, 00:17:49.801 "seek_data": false, 00:17:49.801 "copy": true, 00:17:49.801 "nvme_iov_md": false 00:17:49.801 }, 00:17:49.801 "memory_domains": [ 00:17:49.801 { 00:17:49.801 "dma_device_id": "system", 00:17:49.801 "dma_device_type": 1 00:17:49.801 }, 00:17:49.801 { 00:17:49.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.801 "dma_device_type": 2 00:17:49.801 } 00:17:49.801 ], 00:17:49.801 "driver_specific": {} 00:17:49.801 } 00:17:49.801 ] 00:17:49.801 21:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.801 21:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:49.801 21:44:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:49.801 21:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:49.801 21:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:49.801 21:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:49.801 21:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:49.801 21:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:49.801 21:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.801 21:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.801 21:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.801 21:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.801 21:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.801 21:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.801 21:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.801 21:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.801 21:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.801 21:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.801 "name": "Existed_Raid", 00:17:49.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.801 "strip_size_kb": 64, 00:17:49.801 "state": 
"configuring", 00:17:49.801 "raid_level": "raid5f", 00:17:49.801 "superblock": false, 00:17:49.801 "num_base_bdevs": 4, 00:17:49.801 "num_base_bdevs_discovered": 3, 00:17:49.801 "num_base_bdevs_operational": 4, 00:17:49.801 "base_bdevs_list": [ 00:17:49.801 { 00:17:49.801 "name": "BaseBdev1", 00:17:49.801 "uuid": "a717780a-eb68-49fc-9296-2b46bbd2e0ef", 00:17:49.801 "is_configured": true, 00:17:49.801 "data_offset": 0, 00:17:49.801 "data_size": 65536 00:17:49.801 }, 00:17:49.801 { 00:17:49.801 "name": null, 00:17:49.801 "uuid": "5fee2b73-b798-4d76-b18d-6983240cfe7e", 00:17:49.801 "is_configured": false, 00:17:49.801 "data_offset": 0, 00:17:49.801 "data_size": 65536 00:17:49.801 }, 00:17:49.801 { 00:17:49.801 "name": "BaseBdev3", 00:17:49.801 "uuid": "6f9e5977-3914-4937-91dd-47c6fcdc3480", 00:17:49.801 "is_configured": true, 00:17:49.801 "data_offset": 0, 00:17:49.801 "data_size": 65536 00:17:49.801 }, 00:17:49.801 { 00:17:49.801 "name": "BaseBdev4", 00:17:49.801 "uuid": "075dcf75-1e65-470b-b6a0-54bf0a33de9e", 00:17:49.801 "is_configured": true, 00:17:49.801 "data_offset": 0, 00:17:49.801 "data_size": 65536 00:17:49.801 } 00:17:49.801 ] 00:17:49.801 }' 00:17:49.801 21:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.801 21:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.060 21:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:50.060 21:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.060 21:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.060 21:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.319 21:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.319 21:44:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:50.319 21:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:50.319 21:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.319 21:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.319 [2024-12-10 21:44:50.849773] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:50.319 21:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.319 21:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:50.319 21:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:50.319 21:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:50.319 21:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:50.319 21:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:50.319 21:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:50.319 21:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.319 21:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.319 21:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.319 21:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.319 21:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.319 21:44:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.319 21:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.319 21:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.319 21:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.319 21:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.319 "name": "Existed_Raid", 00:17:50.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.319 "strip_size_kb": 64, 00:17:50.319 "state": "configuring", 00:17:50.319 "raid_level": "raid5f", 00:17:50.319 "superblock": false, 00:17:50.319 "num_base_bdevs": 4, 00:17:50.319 "num_base_bdevs_discovered": 2, 00:17:50.319 "num_base_bdevs_operational": 4, 00:17:50.319 "base_bdevs_list": [ 00:17:50.319 { 00:17:50.319 "name": "BaseBdev1", 00:17:50.319 "uuid": "a717780a-eb68-49fc-9296-2b46bbd2e0ef", 00:17:50.319 "is_configured": true, 00:17:50.319 "data_offset": 0, 00:17:50.319 "data_size": 65536 00:17:50.319 }, 00:17:50.319 { 00:17:50.319 "name": null, 00:17:50.319 "uuid": "5fee2b73-b798-4d76-b18d-6983240cfe7e", 00:17:50.319 "is_configured": false, 00:17:50.319 "data_offset": 0, 00:17:50.319 "data_size": 65536 00:17:50.319 }, 00:17:50.319 { 00:17:50.319 "name": null, 00:17:50.319 "uuid": "6f9e5977-3914-4937-91dd-47c6fcdc3480", 00:17:50.319 "is_configured": false, 00:17:50.319 "data_offset": 0, 00:17:50.319 "data_size": 65536 00:17:50.319 }, 00:17:50.319 { 00:17:50.319 "name": "BaseBdev4", 00:17:50.319 "uuid": "075dcf75-1e65-470b-b6a0-54bf0a33de9e", 00:17:50.319 "is_configured": true, 00:17:50.319 "data_offset": 0, 00:17:50.319 "data_size": 65536 00:17:50.319 } 00:17:50.319 ] 00:17:50.319 }' 00:17:50.319 21:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.319 21:44:50 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.577 21:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.577 21:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.577 21:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.577 21:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:50.577 21:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.577 21:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:50.577 21:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:50.577 21:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.578 21:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.578 [2024-12-10 21:44:51.301019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:50.578 21:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.578 21:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:50.578 21:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:50.578 21:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:50.578 21:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:50.578 21:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:50.578 
21:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:50.578 21:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.578 21:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.578 21:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.578 21:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.578 21:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.578 21:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.578 21:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.578 21:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.578 21:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.578 21:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.578 "name": "Existed_Raid", 00:17:50.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.578 "strip_size_kb": 64, 00:17:50.578 "state": "configuring", 00:17:50.578 "raid_level": "raid5f", 00:17:50.578 "superblock": false, 00:17:50.578 "num_base_bdevs": 4, 00:17:50.578 "num_base_bdevs_discovered": 3, 00:17:50.578 "num_base_bdevs_operational": 4, 00:17:50.578 "base_bdevs_list": [ 00:17:50.578 { 00:17:50.578 "name": "BaseBdev1", 00:17:50.578 "uuid": "a717780a-eb68-49fc-9296-2b46bbd2e0ef", 00:17:50.578 "is_configured": true, 00:17:50.578 "data_offset": 0, 00:17:50.578 "data_size": 65536 00:17:50.578 }, 00:17:50.578 { 00:17:50.578 "name": null, 00:17:50.578 "uuid": "5fee2b73-b798-4d76-b18d-6983240cfe7e", 00:17:50.578 "is_configured": 
false, 00:17:50.578 "data_offset": 0, 00:17:50.578 "data_size": 65536 00:17:50.578 }, 00:17:50.578 { 00:17:50.578 "name": "BaseBdev3", 00:17:50.578 "uuid": "6f9e5977-3914-4937-91dd-47c6fcdc3480", 00:17:50.578 "is_configured": true, 00:17:50.578 "data_offset": 0, 00:17:50.578 "data_size": 65536 00:17:50.578 }, 00:17:50.578 { 00:17:50.578 "name": "BaseBdev4", 00:17:50.578 "uuid": "075dcf75-1e65-470b-b6a0-54bf0a33de9e", 00:17:50.578 "is_configured": true, 00:17:50.578 "data_offset": 0, 00:17:50.578 "data_size": 65536 00:17:50.578 } 00:17:50.578 ] 00:17:50.578 }' 00:17:50.578 21:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.578 21:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.144 21:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.144 21:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.144 21:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.144 21:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:51.144 21:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.144 21:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:51.144 21:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:51.144 21:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.144 21:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.144 [2024-12-10 21:44:51.776258] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:51.144 21:44:51 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.144 21:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:51.144 21:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:51.144 21:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:51.144 21:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:51.144 21:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:51.144 21:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:51.144 21:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.144 21:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.144 21:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.144 21:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.144 21:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.144 21:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.144 21:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.144 21:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:51.144 21:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.404 21:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.404 "name": "Existed_Raid", 00:17:51.404 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:51.404 "strip_size_kb": 64, 00:17:51.404 "state": "configuring", 00:17:51.404 "raid_level": "raid5f", 00:17:51.404 "superblock": false, 00:17:51.404 "num_base_bdevs": 4, 00:17:51.404 "num_base_bdevs_discovered": 2, 00:17:51.404 "num_base_bdevs_operational": 4, 00:17:51.404 "base_bdevs_list": [ 00:17:51.404 { 00:17:51.404 "name": null, 00:17:51.404 "uuid": "a717780a-eb68-49fc-9296-2b46bbd2e0ef", 00:17:51.404 "is_configured": false, 00:17:51.404 "data_offset": 0, 00:17:51.404 "data_size": 65536 00:17:51.404 }, 00:17:51.404 { 00:17:51.404 "name": null, 00:17:51.404 "uuid": "5fee2b73-b798-4d76-b18d-6983240cfe7e", 00:17:51.404 "is_configured": false, 00:17:51.404 "data_offset": 0, 00:17:51.404 "data_size": 65536 00:17:51.404 }, 00:17:51.404 { 00:17:51.404 "name": "BaseBdev3", 00:17:51.404 "uuid": "6f9e5977-3914-4937-91dd-47c6fcdc3480", 00:17:51.404 "is_configured": true, 00:17:51.404 "data_offset": 0, 00:17:51.404 "data_size": 65536 00:17:51.404 }, 00:17:51.404 { 00:17:51.404 "name": "BaseBdev4", 00:17:51.404 "uuid": "075dcf75-1e65-470b-b6a0-54bf0a33de9e", 00:17:51.404 "is_configured": true, 00:17:51.404 "data_offset": 0, 00:17:51.404 "data_size": 65536 00:17:51.404 } 00:17:51.404 ] 00:17:51.404 }' 00:17:51.404 21:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.404 21:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.663 21:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:51.663 21:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.663 21:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.663 21:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.663 21:44:52 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.663 21:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:51.663 21:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:51.663 21:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.663 21:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.663 [2024-12-10 21:44:52.375862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:51.663 21:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.663 21:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:51.663 21:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:51.663 21:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:51.663 21:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:51.663 21:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:51.663 21:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:51.663 21:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.663 21:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.663 21:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.663 21:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.663 21:44:52 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:51.663 21:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.663 21:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.663 21:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.663 21:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.663 21:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.663 "name": "Existed_Raid", 00:17:51.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.663 "strip_size_kb": 64, 00:17:51.663 "state": "configuring", 00:17:51.663 "raid_level": "raid5f", 00:17:51.663 "superblock": false, 00:17:51.663 "num_base_bdevs": 4, 00:17:51.663 "num_base_bdevs_discovered": 3, 00:17:51.663 "num_base_bdevs_operational": 4, 00:17:51.663 "base_bdevs_list": [ 00:17:51.663 { 00:17:51.663 "name": null, 00:17:51.663 "uuid": "a717780a-eb68-49fc-9296-2b46bbd2e0ef", 00:17:51.663 "is_configured": false, 00:17:51.663 "data_offset": 0, 00:17:51.663 "data_size": 65536 00:17:51.663 }, 00:17:51.663 { 00:17:51.663 "name": "BaseBdev2", 00:17:51.663 "uuid": "5fee2b73-b798-4d76-b18d-6983240cfe7e", 00:17:51.663 "is_configured": true, 00:17:51.663 "data_offset": 0, 00:17:51.663 "data_size": 65536 00:17:51.663 }, 00:17:51.663 { 00:17:51.663 "name": "BaseBdev3", 00:17:51.663 "uuid": "6f9e5977-3914-4937-91dd-47c6fcdc3480", 00:17:51.663 "is_configured": true, 00:17:51.663 "data_offset": 0, 00:17:51.663 "data_size": 65536 00:17:51.663 }, 00:17:51.663 { 00:17:51.663 "name": "BaseBdev4", 00:17:51.663 "uuid": "075dcf75-1e65-470b-b6a0-54bf0a33de9e", 00:17:51.663 "is_configured": true, 00:17:51.663 "data_offset": 0, 00:17:51.663 "data_size": 65536 00:17:51.663 } 00:17:51.663 ] 00:17:51.663 }' 00:17:51.663 21:44:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.663 21:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a717780a-eb68-49fc-9296-2b46bbd2e0ef 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.231 [2024-12-10 21:44:52.934383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:52.231 [2024-12-10 
21:44:52.934459] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:52.231 [2024-12-10 21:44:52.934468] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:52.231 [2024-12-10 21:44:52.934735] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:52.231 [2024-12-10 21:44:52.942331] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:52.231 [2024-12-10 21:44:52.942362] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:52.231 [2024-12-10 21:44:52.942652] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:52.231 NewBaseBdev 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.231 [ 00:17:52.231 { 00:17:52.231 "name": "NewBaseBdev", 00:17:52.231 "aliases": [ 00:17:52.231 "a717780a-eb68-49fc-9296-2b46bbd2e0ef" 00:17:52.231 ], 00:17:52.231 "product_name": "Malloc disk", 00:17:52.231 "block_size": 512, 00:17:52.231 "num_blocks": 65536, 00:17:52.231 "uuid": "a717780a-eb68-49fc-9296-2b46bbd2e0ef", 00:17:52.231 "assigned_rate_limits": { 00:17:52.231 "rw_ios_per_sec": 0, 00:17:52.231 "rw_mbytes_per_sec": 0, 00:17:52.231 "r_mbytes_per_sec": 0, 00:17:52.231 "w_mbytes_per_sec": 0 00:17:52.231 }, 00:17:52.231 "claimed": true, 00:17:52.231 "claim_type": "exclusive_write", 00:17:52.231 "zoned": false, 00:17:52.231 "supported_io_types": { 00:17:52.231 "read": true, 00:17:52.231 "write": true, 00:17:52.231 "unmap": true, 00:17:52.231 "flush": true, 00:17:52.231 "reset": true, 00:17:52.231 "nvme_admin": false, 00:17:52.231 "nvme_io": false, 00:17:52.231 "nvme_io_md": false, 00:17:52.231 "write_zeroes": true, 00:17:52.231 "zcopy": true, 00:17:52.231 "get_zone_info": false, 00:17:52.231 "zone_management": false, 00:17:52.231 "zone_append": false, 00:17:52.231 "compare": false, 00:17:52.231 "compare_and_write": false, 00:17:52.231 "abort": true, 00:17:52.231 "seek_hole": false, 00:17:52.231 "seek_data": false, 00:17:52.231 "copy": true, 00:17:52.231 "nvme_iov_md": false 00:17:52.231 }, 00:17:52.231 "memory_domains": [ 00:17:52.231 { 00:17:52.231 "dma_device_id": "system", 00:17:52.231 "dma_device_type": 1 00:17:52.231 }, 00:17:52.231 { 00:17:52.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.231 "dma_device_type": 2 00:17:52.231 } 
00:17:52.231 ], 00:17:52.231 "driver_specific": {} 00:17:52.231 } 00:17:52.231 ] 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.231 21:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.231 21:44:53 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.489 21:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.489 "name": "Existed_Raid", 00:17:52.489 "uuid": "b6af3274-da69-4713-8b06-89f3c55a038e", 00:17:52.489 "strip_size_kb": 64, 00:17:52.489 "state": "online", 00:17:52.489 "raid_level": "raid5f", 00:17:52.489 "superblock": false, 00:17:52.489 "num_base_bdevs": 4, 00:17:52.489 "num_base_bdevs_discovered": 4, 00:17:52.489 "num_base_bdevs_operational": 4, 00:17:52.489 "base_bdevs_list": [ 00:17:52.489 { 00:17:52.489 "name": "NewBaseBdev", 00:17:52.489 "uuid": "a717780a-eb68-49fc-9296-2b46bbd2e0ef", 00:17:52.489 "is_configured": true, 00:17:52.489 "data_offset": 0, 00:17:52.489 "data_size": 65536 00:17:52.489 }, 00:17:52.489 { 00:17:52.489 "name": "BaseBdev2", 00:17:52.489 "uuid": "5fee2b73-b798-4d76-b18d-6983240cfe7e", 00:17:52.489 "is_configured": true, 00:17:52.489 "data_offset": 0, 00:17:52.489 "data_size": 65536 00:17:52.489 }, 00:17:52.489 { 00:17:52.489 "name": "BaseBdev3", 00:17:52.489 "uuid": "6f9e5977-3914-4937-91dd-47c6fcdc3480", 00:17:52.489 "is_configured": true, 00:17:52.489 "data_offset": 0, 00:17:52.489 "data_size": 65536 00:17:52.489 }, 00:17:52.489 { 00:17:52.489 "name": "BaseBdev4", 00:17:52.489 "uuid": "075dcf75-1e65-470b-b6a0-54bf0a33de9e", 00:17:52.489 "is_configured": true, 00:17:52.489 "data_offset": 0, 00:17:52.489 "data_size": 65536 00:17:52.489 } 00:17:52.489 ] 00:17:52.489 }' 00:17:52.489 21:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.489 21:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.748 21:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:52.748 21:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:52.748 21:44:53 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:52.748 21:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:52.748 21:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:52.748 21:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:52.748 21:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:52.748 21:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:52.748 21:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.748 21:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.748 [2024-12-10 21:44:53.431318] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:52.748 21:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.748 21:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:52.748 "name": "Existed_Raid", 00:17:52.748 "aliases": [ 00:17:52.748 "b6af3274-da69-4713-8b06-89f3c55a038e" 00:17:52.748 ], 00:17:52.748 "product_name": "Raid Volume", 00:17:52.748 "block_size": 512, 00:17:52.748 "num_blocks": 196608, 00:17:52.748 "uuid": "b6af3274-da69-4713-8b06-89f3c55a038e", 00:17:52.748 "assigned_rate_limits": { 00:17:52.748 "rw_ios_per_sec": 0, 00:17:52.748 "rw_mbytes_per_sec": 0, 00:17:52.748 "r_mbytes_per_sec": 0, 00:17:52.748 "w_mbytes_per_sec": 0 00:17:52.748 }, 00:17:52.748 "claimed": false, 00:17:52.748 "zoned": false, 00:17:52.748 "supported_io_types": { 00:17:52.748 "read": true, 00:17:52.748 "write": true, 00:17:52.748 "unmap": false, 00:17:52.748 "flush": false, 00:17:52.748 "reset": true, 00:17:52.748 "nvme_admin": false, 00:17:52.748 "nvme_io": false, 00:17:52.748 "nvme_io_md": 
false, 00:17:52.748 "write_zeroes": true, 00:17:52.748 "zcopy": false, 00:17:52.748 "get_zone_info": false, 00:17:52.748 "zone_management": false, 00:17:52.748 "zone_append": false, 00:17:52.748 "compare": false, 00:17:52.748 "compare_and_write": false, 00:17:52.748 "abort": false, 00:17:52.748 "seek_hole": false, 00:17:52.748 "seek_data": false, 00:17:52.748 "copy": false, 00:17:52.748 "nvme_iov_md": false 00:17:52.748 }, 00:17:52.748 "driver_specific": { 00:17:52.748 "raid": { 00:17:52.748 "uuid": "b6af3274-da69-4713-8b06-89f3c55a038e", 00:17:52.748 "strip_size_kb": 64, 00:17:52.748 "state": "online", 00:17:52.748 "raid_level": "raid5f", 00:17:52.748 "superblock": false, 00:17:52.748 "num_base_bdevs": 4, 00:17:52.748 "num_base_bdevs_discovered": 4, 00:17:52.748 "num_base_bdevs_operational": 4, 00:17:52.748 "base_bdevs_list": [ 00:17:52.748 { 00:17:52.748 "name": "NewBaseBdev", 00:17:52.748 "uuid": "a717780a-eb68-49fc-9296-2b46bbd2e0ef", 00:17:52.748 "is_configured": true, 00:17:52.748 "data_offset": 0, 00:17:52.748 "data_size": 65536 00:17:52.748 }, 00:17:52.748 { 00:17:52.748 "name": "BaseBdev2", 00:17:52.748 "uuid": "5fee2b73-b798-4d76-b18d-6983240cfe7e", 00:17:52.748 "is_configured": true, 00:17:52.748 "data_offset": 0, 00:17:52.748 "data_size": 65536 00:17:52.748 }, 00:17:52.748 { 00:17:52.748 "name": "BaseBdev3", 00:17:52.748 "uuid": "6f9e5977-3914-4937-91dd-47c6fcdc3480", 00:17:52.748 "is_configured": true, 00:17:52.748 "data_offset": 0, 00:17:52.748 "data_size": 65536 00:17:52.748 }, 00:17:52.748 { 00:17:52.748 "name": "BaseBdev4", 00:17:52.748 "uuid": "075dcf75-1e65-470b-b6a0-54bf0a33de9e", 00:17:52.748 "is_configured": true, 00:17:52.748 "data_offset": 0, 00:17:52.748 "data_size": 65536 00:17:52.749 } 00:17:52.749 ] 00:17:52.749 } 00:17:52.749 } 00:17:52.749 }' 00:17:52.749 21:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:52.749 21:44:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:52.749 BaseBdev2 00:17:52.749 BaseBdev3 00:17:52.749 BaseBdev4' 00:17:52.749 21:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.007 21:44:53 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.007 [2024-12-10 21:44:53.698619] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:53.007 [2024-12-10 21:44:53.698654] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:53.007 [2024-12-10 21:44:53.698728] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:53.007 [2024-12-10 21:44:53.699053] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:53.007 [2024-12-10 21:44:53.699074] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82959 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 82959 ']' 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 82959 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82959 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:53.007 killing process with pid 82959 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82959' 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 82959 00:17:53.007 [2024-12-10 21:44:53.747609] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:53.007 21:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 82959 00:17:53.574 [2024-12-10 21:44:54.157677] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:54.514 21:44:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:17:54.514 00:17:54.514 real 0m11.529s 00:17:54.514 user 0m18.317s 00:17:54.514 sys 0m2.028s 00:17:54.514 21:44:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:54.514 21:44:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.514 ************************************ 00:17:54.514 END TEST raid5f_state_function_test 00:17:54.514 ************************************ 00:17:54.773 21:44:55 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:17:54.773 21:44:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:54.773 21:44:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:54.773 21:44:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:54.773 ************************************ 00:17:54.773 START TEST 
raid5f_state_function_test_sb 00:17:54.773 ************************************ 00:17:54.773 21:44:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:17:54.773 21:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:54.773 21:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:54.773 21:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:54.773 21:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:54.773 21:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:54.773 21:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:54.773 21:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:54.773 21:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:54.773 21:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:54.773 21:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:54.773 21:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:54.773 21:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:54.773 21:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:54.773 21:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:54.773 21:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:54.773 21:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:54.773 
21:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:54.773 21:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:54.773 21:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:54.773 21:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:54.773 21:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:54.773 21:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:54.773 21:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:54.773 21:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:54.773 21:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:54.773 21:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:54.773 21:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:54.773 21:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:54.773 21:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:54.773 21:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83633 00:17:54.773 21:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:54.773 Process raid pid: 83633 00:17:54.773 21:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83633' 00:17:54.773 21:44:55 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83633 00:17:54.773 21:44:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83633 ']' 00:17:54.773 21:44:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.773 21:44:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.773 21:44:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.773 21:44:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.773 21:44:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.773 [2024-12-10 21:44:55.456718] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:17:54.773 [2024-12-10 21:44:55.456832] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:55.031 [2024-12-10 21:44:55.631155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.031 [2024-12-10 21:44:55.744143] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.290 [2024-12-10 21:44:55.943707] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:55.290 [2024-12-10 21:44:55.943748] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:55.549 21:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:55.549 21:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:55.549 21:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:55.549 21:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.549 21:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.550 [2024-12-10 21:44:56.288139] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:55.550 [2024-12-10 21:44:56.288191] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:55.550 [2024-12-10 21:44:56.288201] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:55.550 [2024-12-10 21:44:56.288210] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:55.550 [2024-12-10 21:44:56.288217] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:17:55.550 [2024-12-10 21:44:56.288225] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:55.550 [2024-12-10 21:44:56.288231] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:55.550 [2024-12-10 21:44:56.288239] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:55.550 21:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.550 21:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:55.550 21:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:55.550 21:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:55.550 21:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:55.550 21:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:55.550 21:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:55.550 21:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.550 21:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.550 21:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.550 21:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.550 21:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.550 21:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:55.550 21:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.550 21:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.550 21:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.809 21:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.809 "name": "Existed_Raid", 00:17:55.809 "uuid": "8c88e40a-a093-437b-9af3-a89dd5258511", 00:17:55.809 "strip_size_kb": 64, 00:17:55.809 "state": "configuring", 00:17:55.809 "raid_level": "raid5f", 00:17:55.809 "superblock": true, 00:17:55.809 "num_base_bdevs": 4, 00:17:55.809 "num_base_bdevs_discovered": 0, 00:17:55.809 "num_base_bdevs_operational": 4, 00:17:55.809 "base_bdevs_list": [ 00:17:55.809 { 00:17:55.809 "name": "BaseBdev1", 00:17:55.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.809 "is_configured": false, 00:17:55.809 "data_offset": 0, 00:17:55.809 "data_size": 0 00:17:55.809 }, 00:17:55.809 { 00:17:55.809 "name": "BaseBdev2", 00:17:55.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.809 "is_configured": false, 00:17:55.810 "data_offset": 0, 00:17:55.810 "data_size": 0 00:17:55.810 }, 00:17:55.810 { 00:17:55.810 "name": "BaseBdev3", 00:17:55.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.810 "is_configured": false, 00:17:55.810 "data_offset": 0, 00:17:55.810 "data_size": 0 00:17:55.810 }, 00:17:55.810 { 00:17:55.810 "name": "BaseBdev4", 00:17:55.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.810 "is_configured": false, 00:17:55.810 "data_offset": 0, 00:17:55.810 "data_size": 0 00:17:55.810 } 00:17:55.810 ] 00:17:55.810 }' 00:17:55.810 21:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.810 21:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:56.070 21:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:56.070 21:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.070 21:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.070 [2024-12-10 21:44:56.723314] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:56.070 [2024-12-10 21:44:56.723357] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:56.070 21:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.070 21:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:56.070 21:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.070 21:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.071 [2024-12-10 21:44:56.731302] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:56.071 [2024-12-10 21:44:56.731343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:56.071 [2024-12-10 21:44:56.731352] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:56.071 [2024-12-10 21:44:56.731361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:56.071 [2024-12-10 21:44:56.731368] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:56.071 [2024-12-10 21:44:56.731378] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:56.071 [2024-12-10 21:44:56.731384] 
bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:56.071 [2024-12-10 21:44:56.731392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:56.071 21:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.071 21:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:56.071 21:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.071 21:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.071 [2024-12-10 21:44:56.775260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:56.071 BaseBdev1 00:17:56.071 21:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.071 21:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:56.071 21:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:56.071 21:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:56.071 21:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:56.071 21:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:56.071 21:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:56.071 21:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:56.071 21:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.071 21:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:17:56.071 21:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.071 21:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:56.071 21:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.071 21:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.071 [ 00:17:56.071 { 00:17:56.071 "name": "BaseBdev1", 00:17:56.071 "aliases": [ 00:17:56.071 "a160ed33-8904-4b45-a280-fd042b3fd75a" 00:17:56.071 ], 00:17:56.071 "product_name": "Malloc disk", 00:17:56.071 "block_size": 512, 00:17:56.071 "num_blocks": 65536, 00:17:56.071 "uuid": "a160ed33-8904-4b45-a280-fd042b3fd75a", 00:17:56.071 "assigned_rate_limits": { 00:17:56.071 "rw_ios_per_sec": 0, 00:17:56.071 "rw_mbytes_per_sec": 0, 00:17:56.071 "r_mbytes_per_sec": 0, 00:17:56.071 "w_mbytes_per_sec": 0 00:17:56.071 }, 00:17:56.071 "claimed": true, 00:17:56.071 "claim_type": "exclusive_write", 00:17:56.071 "zoned": false, 00:17:56.071 "supported_io_types": { 00:17:56.071 "read": true, 00:17:56.071 "write": true, 00:17:56.071 "unmap": true, 00:17:56.071 "flush": true, 00:17:56.071 "reset": true, 00:17:56.071 "nvme_admin": false, 00:17:56.071 "nvme_io": false, 00:17:56.071 "nvme_io_md": false, 00:17:56.071 "write_zeroes": true, 00:17:56.071 "zcopy": true, 00:17:56.071 "get_zone_info": false, 00:17:56.071 "zone_management": false, 00:17:56.071 "zone_append": false, 00:17:56.071 "compare": false, 00:17:56.071 "compare_and_write": false, 00:17:56.071 "abort": true, 00:17:56.071 "seek_hole": false, 00:17:56.071 "seek_data": false, 00:17:56.071 "copy": true, 00:17:56.071 "nvme_iov_md": false 00:17:56.071 }, 00:17:56.071 "memory_domains": [ 00:17:56.071 { 00:17:56.071 "dma_device_id": "system", 00:17:56.071 "dma_device_type": 1 00:17:56.071 }, 00:17:56.071 { 00:17:56.071 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:56.071 "dma_device_type": 2 00:17:56.071 } 00:17:56.071 ], 00:17:56.071 "driver_specific": {} 00:17:56.071 } 00:17:56.071 ] 00:17:56.071 21:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.071 21:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:56.071 21:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:56.071 21:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:56.071 21:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:56.071 21:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:56.071 21:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:56.071 21:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:56.071 21:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.071 21:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.071 21:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.071 21:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.071 21:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.071 21:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.071 21:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.071 21:44:56 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.071 21:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.071 21:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.071 "name": "Existed_Raid", 00:17:56.071 "uuid": "2a57ea60-5725-4cca-b1ce-e75ad55090a8", 00:17:56.071 "strip_size_kb": 64, 00:17:56.071 "state": "configuring", 00:17:56.071 "raid_level": "raid5f", 00:17:56.071 "superblock": true, 00:17:56.071 "num_base_bdevs": 4, 00:17:56.071 "num_base_bdevs_discovered": 1, 00:17:56.071 "num_base_bdevs_operational": 4, 00:17:56.071 "base_bdevs_list": [ 00:17:56.071 { 00:17:56.071 "name": "BaseBdev1", 00:17:56.071 "uuid": "a160ed33-8904-4b45-a280-fd042b3fd75a", 00:17:56.071 "is_configured": true, 00:17:56.071 "data_offset": 2048, 00:17:56.071 "data_size": 63488 00:17:56.071 }, 00:17:56.071 { 00:17:56.071 "name": "BaseBdev2", 00:17:56.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.071 "is_configured": false, 00:17:56.071 "data_offset": 0, 00:17:56.071 "data_size": 0 00:17:56.071 }, 00:17:56.071 { 00:17:56.071 "name": "BaseBdev3", 00:17:56.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.071 "is_configured": false, 00:17:56.071 "data_offset": 0, 00:17:56.071 "data_size": 0 00:17:56.071 }, 00:17:56.071 { 00:17:56.071 "name": "BaseBdev4", 00:17:56.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.071 "is_configured": false, 00:17:56.071 "data_offset": 0, 00:17:56.071 "data_size": 0 00:17:56.071 } 00:17:56.071 ] 00:17:56.071 }' 00:17:56.071 21:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.071 21:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.653 21:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:56.653 21:44:57 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.653 21:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.653 [2024-12-10 21:44:57.242541] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:56.654 [2024-12-10 21:44:57.242596] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:56.654 21:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.654 21:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:56.654 21:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.654 21:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.654 [2024-12-10 21:44:57.254595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:56.654 [2024-12-10 21:44:57.256416] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:56.654 [2024-12-10 21:44:57.256466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:56.654 [2024-12-10 21:44:57.256475] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:56.654 [2024-12-10 21:44:57.256485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:56.654 [2024-12-10 21:44:57.256492] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:56.654 [2024-12-10 21:44:57.256500] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:56.654 21:44:57 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.654 21:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:56.654 21:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:56.654 21:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:56.654 21:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:56.654 21:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:56.654 21:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:56.654 21:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:56.654 21:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:56.654 21:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.654 21:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.654 21:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.654 21:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.654 21:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.654 21:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.654 21:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.654 21:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.654 21:44:57 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.654 21:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.654 "name": "Existed_Raid", 00:17:56.654 "uuid": "e3eb5beb-eaab-4e0c-b296-3b9b74b0ae10", 00:17:56.654 "strip_size_kb": 64, 00:17:56.654 "state": "configuring", 00:17:56.654 "raid_level": "raid5f", 00:17:56.654 "superblock": true, 00:17:56.654 "num_base_bdevs": 4, 00:17:56.654 "num_base_bdevs_discovered": 1, 00:17:56.654 "num_base_bdevs_operational": 4, 00:17:56.654 "base_bdevs_list": [ 00:17:56.654 { 00:17:56.654 "name": "BaseBdev1", 00:17:56.654 "uuid": "a160ed33-8904-4b45-a280-fd042b3fd75a", 00:17:56.654 "is_configured": true, 00:17:56.654 "data_offset": 2048, 00:17:56.654 "data_size": 63488 00:17:56.654 }, 00:17:56.654 { 00:17:56.654 "name": "BaseBdev2", 00:17:56.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.654 "is_configured": false, 00:17:56.654 "data_offset": 0, 00:17:56.654 "data_size": 0 00:17:56.654 }, 00:17:56.654 { 00:17:56.654 "name": "BaseBdev3", 00:17:56.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.654 "is_configured": false, 00:17:56.654 "data_offset": 0, 00:17:56.654 "data_size": 0 00:17:56.654 }, 00:17:56.654 { 00:17:56.654 "name": "BaseBdev4", 00:17:56.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.654 "is_configured": false, 00:17:56.654 "data_offset": 0, 00:17:56.654 "data_size": 0 00:17:56.654 } 00:17:56.654 ] 00:17:56.654 }' 00:17:56.654 21:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.654 21:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.914 21:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:56.914 21:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:56.914 21:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.914 [2024-12-10 21:44:57.679097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:56.914 BaseBdev2 00:17:56.914 21:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.914 21:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:56.914 21:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:56.914 21:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:56.914 21:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:56.914 21:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:56.914 21:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:56.914 21:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:56.914 21:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.914 21:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.914 21:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.914 21:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:56.914 21:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.914 21:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.177 [ 00:17:57.177 { 00:17:57.177 "name": "BaseBdev2", 00:17:57.177 "aliases": [ 00:17:57.177 
"5095c9be-030a-41c7-8575-dc27c806c9f8" 00:17:57.177 ], 00:17:57.177 "product_name": "Malloc disk", 00:17:57.177 "block_size": 512, 00:17:57.177 "num_blocks": 65536, 00:17:57.177 "uuid": "5095c9be-030a-41c7-8575-dc27c806c9f8", 00:17:57.177 "assigned_rate_limits": { 00:17:57.177 "rw_ios_per_sec": 0, 00:17:57.177 "rw_mbytes_per_sec": 0, 00:17:57.177 "r_mbytes_per_sec": 0, 00:17:57.177 "w_mbytes_per_sec": 0 00:17:57.177 }, 00:17:57.177 "claimed": true, 00:17:57.177 "claim_type": "exclusive_write", 00:17:57.177 "zoned": false, 00:17:57.177 "supported_io_types": { 00:17:57.177 "read": true, 00:17:57.177 "write": true, 00:17:57.177 "unmap": true, 00:17:57.177 "flush": true, 00:17:57.177 "reset": true, 00:17:57.177 "nvme_admin": false, 00:17:57.177 "nvme_io": false, 00:17:57.177 "nvme_io_md": false, 00:17:57.177 "write_zeroes": true, 00:17:57.177 "zcopy": true, 00:17:57.177 "get_zone_info": false, 00:17:57.177 "zone_management": false, 00:17:57.177 "zone_append": false, 00:17:57.177 "compare": false, 00:17:57.177 "compare_and_write": false, 00:17:57.177 "abort": true, 00:17:57.177 "seek_hole": false, 00:17:57.177 "seek_data": false, 00:17:57.177 "copy": true, 00:17:57.177 "nvme_iov_md": false 00:17:57.177 }, 00:17:57.177 "memory_domains": [ 00:17:57.177 { 00:17:57.177 "dma_device_id": "system", 00:17:57.177 "dma_device_type": 1 00:17:57.177 }, 00:17:57.177 { 00:17:57.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.177 "dma_device_type": 2 00:17:57.177 } 00:17:57.177 ], 00:17:57.177 "driver_specific": {} 00:17:57.177 } 00:17:57.177 ] 00:17:57.177 21:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.177 21:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:57.177 21:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:57.177 21:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:17:57.177 21:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:57.177 21:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:57.177 21:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:57.177 21:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:57.177 21:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:57.177 21:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:57.177 21:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.177 21:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.177 21:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.177 21:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.177 21:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.177 21:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.177 21:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.177 21:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.177 21:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.177 21:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.177 "name": "Existed_Raid", 00:17:57.177 "uuid": 
"e3eb5beb-eaab-4e0c-b296-3b9b74b0ae10", 00:17:57.177 "strip_size_kb": 64, 00:17:57.177 "state": "configuring", 00:17:57.177 "raid_level": "raid5f", 00:17:57.177 "superblock": true, 00:17:57.177 "num_base_bdevs": 4, 00:17:57.177 "num_base_bdevs_discovered": 2, 00:17:57.177 "num_base_bdevs_operational": 4, 00:17:57.177 "base_bdevs_list": [ 00:17:57.177 { 00:17:57.177 "name": "BaseBdev1", 00:17:57.177 "uuid": "a160ed33-8904-4b45-a280-fd042b3fd75a", 00:17:57.177 "is_configured": true, 00:17:57.177 "data_offset": 2048, 00:17:57.177 "data_size": 63488 00:17:57.177 }, 00:17:57.177 { 00:17:57.177 "name": "BaseBdev2", 00:17:57.177 "uuid": "5095c9be-030a-41c7-8575-dc27c806c9f8", 00:17:57.177 "is_configured": true, 00:17:57.177 "data_offset": 2048, 00:17:57.177 "data_size": 63488 00:17:57.177 }, 00:17:57.177 { 00:17:57.177 "name": "BaseBdev3", 00:17:57.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.177 "is_configured": false, 00:17:57.177 "data_offset": 0, 00:17:57.177 "data_size": 0 00:17:57.177 }, 00:17:57.177 { 00:17:57.177 "name": "BaseBdev4", 00:17:57.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.177 "is_configured": false, 00:17:57.177 "data_offset": 0, 00:17:57.177 "data_size": 0 00:17:57.177 } 00:17:57.177 ] 00:17:57.177 }' 00:17:57.177 21:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.177 21:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.438 21:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:57.438 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.438 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.438 [2024-12-10 21:44:58.202648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:57.438 BaseBdev3 
00:17:57.438 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.438 21:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:57.438 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:57.438 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:57.438 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:57.438 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:57.438 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:57.438 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:57.438 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.438 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.438 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.438 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:57.438 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.438 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.698 [ 00:17:57.698 { 00:17:57.698 "name": "BaseBdev3", 00:17:57.698 "aliases": [ 00:17:57.698 "ce6642ff-1b78-4f5a-9382-d5d8c0f9aae0" 00:17:57.698 ], 00:17:57.698 "product_name": "Malloc disk", 00:17:57.698 "block_size": 512, 00:17:57.698 "num_blocks": 65536, 00:17:57.698 "uuid": "ce6642ff-1b78-4f5a-9382-d5d8c0f9aae0", 00:17:57.698 
"assigned_rate_limits": { 00:17:57.698 "rw_ios_per_sec": 0, 00:17:57.698 "rw_mbytes_per_sec": 0, 00:17:57.698 "r_mbytes_per_sec": 0, 00:17:57.698 "w_mbytes_per_sec": 0 00:17:57.698 }, 00:17:57.698 "claimed": true, 00:17:57.698 "claim_type": "exclusive_write", 00:17:57.698 "zoned": false, 00:17:57.698 "supported_io_types": { 00:17:57.698 "read": true, 00:17:57.698 "write": true, 00:17:57.698 "unmap": true, 00:17:57.698 "flush": true, 00:17:57.698 "reset": true, 00:17:57.698 "nvme_admin": false, 00:17:57.698 "nvme_io": false, 00:17:57.698 "nvme_io_md": false, 00:17:57.698 "write_zeroes": true, 00:17:57.698 "zcopy": true, 00:17:57.698 "get_zone_info": false, 00:17:57.698 "zone_management": false, 00:17:57.698 "zone_append": false, 00:17:57.698 "compare": false, 00:17:57.698 "compare_and_write": false, 00:17:57.698 "abort": true, 00:17:57.698 "seek_hole": false, 00:17:57.698 "seek_data": false, 00:17:57.698 "copy": true, 00:17:57.698 "nvme_iov_md": false 00:17:57.698 }, 00:17:57.698 "memory_domains": [ 00:17:57.698 { 00:17:57.698 "dma_device_id": "system", 00:17:57.698 "dma_device_type": 1 00:17:57.698 }, 00:17:57.698 { 00:17:57.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.698 "dma_device_type": 2 00:17:57.698 } 00:17:57.698 ], 00:17:57.698 "driver_specific": {} 00:17:57.698 } 00:17:57.698 ] 00:17:57.698 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.698 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:57.698 21:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:57.698 21:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:57.698 21:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:57.698 21:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:17:57.698 21:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:57.698 21:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:57.698 21:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:57.698 21:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:57.698 21:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.698 21:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.698 21:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.698 21:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.698 21:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.698 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.698 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.698 21:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.698 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.698 21:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.698 "name": "Existed_Raid", 00:17:57.698 "uuid": "e3eb5beb-eaab-4e0c-b296-3b9b74b0ae10", 00:17:57.698 "strip_size_kb": 64, 00:17:57.698 "state": "configuring", 00:17:57.698 "raid_level": "raid5f", 00:17:57.698 "superblock": true, 00:17:57.698 "num_base_bdevs": 4, 00:17:57.698 "num_base_bdevs_discovered": 3, 
00:17:57.698 "num_base_bdevs_operational": 4, 00:17:57.699 "base_bdevs_list": [ 00:17:57.699 { 00:17:57.699 "name": "BaseBdev1", 00:17:57.699 "uuid": "a160ed33-8904-4b45-a280-fd042b3fd75a", 00:17:57.699 "is_configured": true, 00:17:57.699 "data_offset": 2048, 00:17:57.699 "data_size": 63488 00:17:57.699 }, 00:17:57.699 { 00:17:57.699 "name": "BaseBdev2", 00:17:57.699 "uuid": "5095c9be-030a-41c7-8575-dc27c806c9f8", 00:17:57.699 "is_configured": true, 00:17:57.699 "data_offset": 2048, 00:17:57.699 "data_size": 63488 00:17:57.699 }, 00:17:57.699 { 00:17:57.699 "name": "BaseBdev3", 00:17:57.699 "uuid": "ce6642ff-1b78-4f5a-9382-d5d8c0f9aae0", 00:17:57.699 "is_configured": true, 00:17:57.699 "data_offset": 2048, 00:17:57.699 "data_size": 63488 00:17:57.699 }, 00:17:57.699 { 00:17:57.699 "name": "BaseBdev4", 00:17:57.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.699 "is_configured": false, 00:17:57.699 "data_offset": 0, 00:17:57.699 "data_size": 0 00:17:57.699 } 00:17:57.699 ] 00:17:57.699 }' 00:17:57.699 21:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.699 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.959 21:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:57.959 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.959 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.959 [2024-12-10 21:44:58.690844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:57.959 [2024-12-10 21:44:58.691138] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:57.959 [2024-12-10 21:44:58.691153] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:57.959 [2024-12-10 
21:44:58.691429] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:57.959 BaseBdev4 00:17:57.959 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.959 21:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:57.959 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:57.959 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:57.959 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:57.959 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:57.959 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:57.959 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:57.959 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.959 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.959 [2024-12-10 21:44:58.698966] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:57.959 [2024-12-10 21:44:58.698995] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:57.959 [2024-12-10 21:44:58.699251] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:57.959 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.959 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:57.959 21:44:58 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.959 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.959 [ 00:17:57.959 { 00:17:57.959 "name": "BaseBdev4", 00:17:57.959 "aliases": [ 00:17:57.959 "60bb92b3-cb03-4a45-9215-17d7ff713f71" 00:17:57.959 ], 00:17:57.959 "product_name": "Malloc disk", 00:17:57.959 "block_size": 512, 00:17:57.959 "num_blocks": 65536, 00:17:57.959 "uuid": "60bb92b3-cb03-4a45-9215-17d7ff713f71", 00:17:57.959 "assigned_rate_limits": { 00:17:57.959 "rw_ios_per_sec": 0, 00:17:57.959 "rw_mbytes_per_sec": 0, 00:17:57.959 "r_mbytes_per_sec": 0, 00:17:57.959 "w_mbytes_per_sec": 0 00:17:57.959 }, 00:17:57.959 "claimed": true, 00:17:57.959 "claim_type": "exclusive_write", 00:17:57.959 "zoned": false, 00:17:57.959 "supported_io_types": { 00:17:57.959 "read": true, 00:17:57.959 "write": true, 00:17:57.959 "unmap": true, 00:17:57.959 "flush": true, 00:17:57.959 "reset": true, 00:17:57.959 "nvme_admin": false, 00:17:57.959 "nvme_io": false, 00:17:57.959 "nvme_io_md": false, 00:17:57.959 "write_zeroes": true, 00:17:57.959 "zcopy": true, 00:17:57.959 "get_zone_info": false, 00:17:57.959 "zone_management": false, 00:17:57.959 "zone_append": false, 00:17:57.959 "compare": false, 00:17:57.959 "compare_and_write": false, 00:17:57.959 "abort": true, 00:17:57.959 "seek_hole": false, 00:17:57.959 "seek_data": false, 00:17:57.959 "copy": true, 00:17:57.959 "nvme_iov_md": false 00:17:57.959 }, 00:17:57.959 "memory_domains": [ 00:17:57.959 { 00:17:57.959 "dma_device_id": "system", 00:17:57.959 "dma_device_type": 1 00:17:57.959 }, 00:17:57.959 { 00:17:57.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.959 "dma_device_type": 2 00:17:57.959 } 00:17:57.959 ], 00:17:57.959 "driver_specific": {} 00:17:57.959 } 00:17:57.959 ] 00:17:57.959 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.959 21:44:58 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:57.959 21:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:57.959 21:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:57.959 21:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:57.959 21:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:57.959 21:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:57.959 21:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:57.959 21:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:57.959 21:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:57.959 21:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.959 21:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.959 21:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.959 21:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.959 21:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.959 21:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.959 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.959 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:58.219 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.219 21:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.219 "name": "Existed_Raid", 00:17:58.219 "uuid": "e3eb5beb-eaab-4e0c-b296-3b9b74b0ae10", 00:17:58.219 "strip_size_kb": 64, 00:17:58.219 "state": "online", 00:17:58.219 "raid_level": "raid5f", 00:17:58.219 "superblock": true, 00:17:58.219 "num_base_bdevs": 4, 00:17:58.219 "num_base_bdevs_discovered": 4, 00:17:58.219 "num_base_bdevs_operational": 4, 00:17:58.219 "base_bdevs_list": [ 00:17:58.219 { 00:17:58.219 "name": "BaseBdev1", 00:17:58.219 "uuid": "a160ed33-8904-4b45-a280-fd042b3fd75a", 00:17:58.219 "is_configured": true, 00:17:58.219 "data_offset": 2048, 00:17:58.219 "data_size": 63488 00:17:58.219 }, 00:17:58.219 { 00:17:58.219 "name": "BaseBdev2", 00:17:58.219 "uuid": "5095c9be-030a-41c7-8575-dc27c806c9f8", 00:17:58.219 "is_configured": true, 00:17:58.219 "data_offset": 2048, 00:17:58.219 "data_size": 63488 00:17:58.219 }, 00:17:58.219 { 00:17:58.219 "name": "BaseBdev3", 00:17:58.219 "uuid": "ce6642ff-1b78-4f5a-9382-d5d8c0f9aae0", 00:17:58.219 "is_configured": true, 00:17:58.219 "data_offset": 2048, 00:17:58.219 "data_size": 63488 00:17:58.219 }, 00:17:58.219 { 00:17:58.219 "name": "BaseBdev4", 00:17:58.219 "uuid": "60bb92b3-cb03-4a45-9215-17d7ff713f71", 00:17:58.219 "is_configured": true, 00:17:58.219 "data_offset": 2048, 00:17:58.219 "data_size": 63488 00:17:58.219 } 00:17:58.219 ] 00:17:58.219 }' 00:17:58.219 21:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.219 21:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.480 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:58.480 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:17:58.480 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:58.480 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:58.480 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:58.480 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:58.480 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:58.480 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:58.480 21:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.480 21:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.480 [2024-12-10 21:44:59.191073] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:58.480 21:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.480 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:58.480 "name": "Existed_Raid", 00:17:58.480 "aliases": [ 00:17:58.480 "e3eb5beb-eaab-4e0c-b296-3b9b74b0ae10" 00:17:58.480 ], 00:17:58.480 "product_name": "Raid Volume", 00:17:58.480 "block_size": 512, 00:17:58.480 "num_blocks": 190464, 00:17:58.480 "uuid": "e3eb5beb-eaab-4e0c-b296-3b9b74b0ae10", 00:17:58.480 "assigned_rate_limits": { 00:17:58.480 "rw_ios_per_sec": 0, 00:17:58.480 "rw_mbytes_per_sec": 0, 00:17:58.480 "r_mbytes_per_sec": 0, 00:17:58.480 "w_mbytes_per_sec": 0 00:17:58.480 }, 00:17:58.480 "claimed": false, 00:17:58.480 "zoned": false, 00:17:58.480 "supported_io_types": { 00:17:58.480 "read": true, 00:17:58.480 "write": true, 00:17:58.480 "unmap": false, 00:17:58.480 "flush": false, 
00:17:58.480 "reset": true, 00:17:58.480 "nvme_admin": false, 00:17:58.480 "nvme_io": false, 00:17:58.480 "nvme_io_md": false, 00:17:58.480 "write_zeroes": true, 00:17:58.480 "zcopy": false, 00:17:58.480 "get_zone_info": false, 00:17:58.480 "zone_management": false, 00:17:58.480 "zone_append": false, 00:17:58.480 "compare": false, 00:17:58.480 "compare_and_write": false, 00:17:58.480 "abort": false, 00:17:58.480 "seek_hole": false, 00:17:58.480 "seek_data": false, 00:17:58.480 "copy": false, 00:17:58.480 "nvme_iov_md": false 00:17:58.480 }, 00:17:58.480 "driver_specific": { 00:17:58.480 "raid": { 00:17:58.480 "uuid": "e3eb5beb-eaab-4e0c-b296-3b9b74b0ae10", 00:17:58.480 "strip_size_kb": 64, 00:17:58.480 "state": "online", 00:17:58.480 "raid_level": "raid5f", 00:17:58.480 "superblock": true, 00:17:58.480 "num_base_bdevs": 4, 00:17:58.480 "num_base_bdevs_discovered": 4, 00:17:58.480 "num_base_bdevs_operational": 4, 00:17:58.480 "base_bdevs_list": [ 00:17:58.480 { 00:17:58.480 "name": "BaseBdev1", 00:17:58.480 "uuid": "a160ed33-8904-4b45-a280-fd042b3fd75a", 00:17:58.480 "is_configured": true, 00:17:58.480 "data_offset": 2048, 00:17:58.480 "data_size": 63488 00:17:58.480 }, 00:17:58.480 { 00:17:58.480 "name": "BaseBdev2", 00:17:58.480 "uuid": "5095c9be-030a-41c7-8575-dc27c806c9f8", 00:17:58.480 "is_configured": true, 00:17:58.480 "data_offset": 2048, 00:17:58.480 "data_size": 63488 00:17:58.480 }, 00:17:58.480 { 00:17:58.480 "name": "BaseBdev3", 00:17:58.480 "uuid": "ce6642ff-1b78-4f5a-9382-d5d8c0f9aae0", 00:17:58.480 "is_configured": true, 00:17:58.480 "data_offset": 2048, 00:17:58.480 "data_size": 63488 00:17:58.480 }, 00:17:58.480 { 00:17:58.480 "name": "BaseBdev4", 00:17:58.480 "uuid": "60bb92b3-cb03-4a45-9215-17d7ff713f71", 00:17:58.480 "is_configured": true, 00:17:58.480 "data_offset": 2048, 00:17:58.480 "data_size": 63488 00:17:58.480 } 00:17:58.480 ] 00:17:58.480 } 00:17:58.480 } 00:17:58.480 }' 00:17:58.480 21:44:59 bdev_raid.raid5f_state_function_test_sb -- 
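The Raid Volume dump above reports `"num_blocks": 190464` for a raid5f array built from four base bdevs, each with `"data_size": 63488`. Since raid5f dedicates one member's worth of capacity per stripe set to parity, the usable size should be (n − 1) × data_size. A minimal sketch checking that arithmetic against the values in the dump (the variable names here are illustrative, not part of the SPDK API):

```python
# Check that the reported raid5f volume size matches (n - 1) * per-bdev data size.
# Values are taken from the bdev_get_bdevs dump above; raid5f stores the
# equivalent of one base bdev of parity distributed across the stripe set.
num_base_bdevs = 4
data_size_blocks = 63488      # per-base-bdev data_size from base_bdevs_list
reported_num_blocks = 190464  # num_blocks reported for Existed_Raid

usable_blocks = (num_base_bdevs - 1) * data_size_blocks
assert usable_blocks == reported_num_blocks
print(usable_blocks)  # 190464
```

This is why `blockcnt 190464` appears in the `raid_bdev_configure_cont` debug line when the fourth base bdev is claimed and the array comes online.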
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:58.740 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:58.740 BaseBdev2 00:17:58.740 BaseBdev3 00:17:58.740 BaseBdev4' 00:17:58.740 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:58.740 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:58.740 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:58.740 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:58.740 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:58.740 21:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.740 21:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.740 21:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.740 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:58.740 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:58.740 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:58.740 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:58.740 21:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.740 21:44:59 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:58.740 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:58.740 21:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.740 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:58.740 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:58.740 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:58.740 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:58.740 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:58.740 21:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.740 21:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.740 21:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.740 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:58.740 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:58.740 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:58.740 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:58.740 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:58.740 21:44:59 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.740 21:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.740 21:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.740 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:58.740 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:58.740 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:58.740 21:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.740 21:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.740 [2024-12-10 21:44:59.506370] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:59.000 21:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.000 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:59.000 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:59.000 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:59.000 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:17:59.000 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:59.000 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:59.000 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:59.000 21:44:59 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.000 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:59.000 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:59.000 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:59.000 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.000 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.000 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.000 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.000 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.000 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:59.000 21:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.000 21:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.000 21:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.000 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.000 "name": "Existed_Raid", 00:17:59.000 "uuid": "e3eb5beb-eaab-4e0c-b296-3b9b74b0ae10", 00:17:59.000 "strip_size_kb": 64, 00:17:59.000 "state": "online", 00:17:59.000 "raid_level": "raid5f", 00:17:59.000 "superblock": true, 00:17:59.000 "num_base_bdevs": 4, 00:17:59.000 "num_base_bdevs_discovered": 3, 00:17:59.000 "num_base_bdevs_operational": 3, 00:17:59.000 "base_bdevs_list": [ 00:17:59.000 { 00:17:59.000 "name": 
null, 00:17:59.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.000 "is_configured": false, 00:17:59.000 "data_offset": 0, 00:17:59.000 "data_size": 63488 00:17:59.000 }, 00:17:59.000 { 00:17:59.000 "name": "BaseBdev2", 00:17:59.000 "uuid": "5095c9be-030a-41c7-8575-dc27c806c9f8", 00:17:59.000 "is_configured": true, 00:17:59.000 "data_offset": 2048, 00:17:59.000 "data_size": 63488 00:17:59.000 }, 00:17:59.000 { 00:17:59.000 "name": "BaseBdev3", 00:17:59.000 "uuid": "ce6642ff-1b78-4f5a-9382-d5d8c0f9aae0", 00:17:59.000 "is_configured": true, 00:17:59.000 "data_offset": 2048, 00:17:59.000 "data_size": 63488 00:17:59.000 }, 00:17:59.000 { 00:17:59.000 "name": "BaseBdev4", 00:17:59.000 "uuid": "60bb92b3-cb03-4a45-9215-17d7ff713f71", 00:17:59.000 "is_configured": true, 00:17:59.001 "data_offset": 2048, 00:17:59.001 "data_size": 63488 00:17:59.001 } 00:17:59.001 ] 00:17:59.001 }' 00:17:59.001 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.001 21:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.260 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:59.260 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:59.260 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:59.260 21:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.260 21:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.260 21:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.260 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.260 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:17:59.260 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:59.260 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:59.260 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.260 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.260 [2024-12-10 21:45:00.029747] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:59.260 [2024-12-10 21:45:00.029926] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:59.520 [2024-12-10 21:45:00.121459] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:59.520 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.520 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:59.520 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:59.520 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.520 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:59.520 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.520 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.520 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.520 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:59.520 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:17:59.520 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:59.520 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.520 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.520 [2024-12-10 21:45:00.177360] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:59.520 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.520 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:59.520 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:59.520 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.520 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.520 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.520 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:59.520 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.779 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:59.779 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:59.779 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:59.779 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.779 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.779 [2024-12-10 
21:45:00.334393] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:59.780 [2024-12-10 21:45:00.334465] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:59.780 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.780 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:59.780 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:59.780 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.780 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:59.780 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.780 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.780 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.780 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:59.780 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:59.780 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:59.780 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:59.780 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:59.780 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:59.780 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.780 21:45:00 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.780 BaseBdev2 00:17:59.780 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.780 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:59.780 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:59.780 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:59.780 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:59.780 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:59.780 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:59.780 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:59.780 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.780 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.780 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.780 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:59.780 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.780 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.040 [ 00:18:00.040 { 00:18:00.040 "name": "BaseBdev2", 00:18:00.040 "aliases": [ 00:18:00.040 "ff398c5a-271b-4291-ac5d-344e2a56ade5" 00:18:00.040 ], 00:18:00.040 "product_name": "Malloc disk", 00:18:00.040 "block_size": 512, 00:18:00.040 
"num_blocks": 65536, 00:18:00.040 "uuid": "ff398c5a-271b-4291-ac5d-344e2a56ade5", 00:18:00.040 "assigned_rate_limits": { 00:18:00.040 "rw_ios_per_sec": 0, 00:18:00.040 "rw_mbytes_per_sec": 0, 00:18:00.040 "r_mbytes_per_sec": 0, 00:18:00.040 "w_mbytes_per_sec": 0 00:18:00.040 }, 00:18:00.040 "claimed": false, 00:18:00.040 "zoned": false, 00:18:00.040 "supported_io_types": { 00:18:00.040 "read": true, 00:18:00.040 "write": true, 00:18:00.040 "unmap": true, 00:18:00.040 "flush": true, 00:18:00.040 "reset": true, 00:18:00.040 "nvme_admin": false, 00:18:00.040 "nvme_io": false, 00:18:00.040 "nvme_io_md": false, 00:18:00.040 "write_zeroes": true, 00:18:00.040 "zcopy": true, 00:18:00.040 "get_zone_info": false, 00:18:00.040 "zone_management": false, 00:18:00.040 "zone_append": false, 00:18:00.040 "compare": false, 00:18:00.040 "compare_and_write": false, 00:18:00.040 "abort": true, 00:18:00.040 "seek_hole": false, 00:18:00.040 "seek_data": false, 00:18:00.040 "copy": true, 00:18:00.040 "nvme_iov_md": false 00:18:00.040 }, 00:18:00.040 "memory_domains": [ 00:18:00.040 { 00:18:00.040 "dma_device_id": "system", 00:18:00.040 "dma_device_type": 1 00:18:00.040 }, 00:18:00.040 { 00:18:00.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.040 "dma_device_type": 2 00:18:00.040 } 00:18:00.040 ], 00:18:00.040 "driver_specific": {} 00:18:00.040 } 00:18:00.040 ] 00:18:00.040 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.040 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:00.040 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:00.040 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:00.040 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:00.040 21:45:00 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.040 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.040 BaseBdev3 00:18:00.040 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.040 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:00.040 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:00.040 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:00.040 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:00.040 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:00.040 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:00.040 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:00.040 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.040 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.040 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.040 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:00.040 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.040 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.040 [ 00:18:00.040 { 00:18:00.040 "name": "BaseBdev3", 00:18:00.040 "aliases": [ 00:18:00.040 
"3c5341ec-1d86-4e0a-9bba-b0ce3180f5e2" 00:18:00.040 ], 00:18:00.041 "product_name": "Malloc disk", 00:18:00.041 "block_size": 512, 00:18:00.041 "num_blocks": 65536, 00:18:00.041 "uuid": "3c5341ec-1d86-4e0a-9bba-b0ce3180f5e2", 00:18:00.041 "assigned_rate_limits": { 00:18:00.041 "rw_ios_per_sec": 0, 00:18:00.041 "rw_mbytes_per_sec": 0, 00:18:00.041 "r_mbytes_per_sec": 0, 00:18:00.041 "w_mbytes_per_sec": 0 00:18:00.041 }, 00:18:00.041 "claimed": false, 00:18:00.041 "zoned": false, 00:18:00.041 "supported_io_types": { 00:18:00.041 "read": true, 00:18:00.041 "write": true, 00:18:00.041 "unmap": true, 00:18:00.041 "flush": true, 00:18:00.041 "reset": true, 00:18:00.041 "nvme_admin": false, 00:18:00.041 "nvme_io": false, 00:18:00.041 "nvme_io_md": false, 00:18:00.041 "write_zeroes": true, 00:18:00.041 "zcopy": true, 00:18:00.041 "get_zone_info": false, 00:18:00.041 "zone_management": false, 00:18:00.041 "zone_append": false, 00:18:00.041 "compare": false, 00:18:00.041 "compare_and_write": false, 00:18:00.041 "abort": true, 00:18:00.041 "seek_hole": false, 00:18:00.041 "seek_data": false, 00:18:00.041 "copy": true, 00:18:00.041 "nvme_iov_md": false 00:18:00.041 }, 00:18:00.041 "memory_domains": [ 00:18:00.041 { 00:18:00.041 "dma_device_id": "system", 00:18:00.041 "dma_device_type": 1 00:18:00.041 }, 00:18:00.041 { 00:18:00.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.041 "dma_device_type": 2 00:18:00.041 } 00:18:00.041 ], 00:18:00.041 "driver_specific": {} 00:18:00.041 } 00:18:00.041 ] 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:00.041 21:45:00 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.041 BaseBdev4 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:18:00.041 [ 00:18:00.041 { 00:18:00.041 "name": "BaseBdev4", 00:18:00.041 "aliases": [ 00:18:00.041 "f08785cc-7e70-437b-abd9-7a2a3f952aeb" 00:18:00.041 ], 00:18:00.041 "product_name": "Malloc disk", 00:18:00.041 "block_size": 512, 00:18:00.041 "num_blocks": 65536, 00:18:00.041 "uuid": "f08785cc-7e70-437b-abd9-7a2a3f952aeb", 00:18:00.041 "assigned_rate_limits": { 00:18:00.041 "rw_ios_per_sec": 0, 00:18:00.041 "rw_mbytes_per_sec": 0, 00:18:00.041 "r_mbytes_per_sec": 0, 00:18:00.041 "w_mbytes_per_sec": 0 00:18:00.041 }, 00:18:00.041 "claimed": false, 00:18:00.041 "zoned": false, 00:18:00.041 "supported_io_types": { 00:18:00.041 "read": true, 00:18:00.041 "write": true, 00:18:00.041 "unmap": true, 00:18:00.041 "flush": true, 00:18:00.041 "reset": true, 00:18:00.041 "nvme_admin": false, 00:18:00.041 "nvme_io": false, 00:18:00.041 "nvme_io_md": false, 00:18:00.041 "write_zeroes": true, 00:18:00.041 "zcopy": true, 00:18:00.041 "get_zone_info": false, 00:18:00.041 "zone_management": false, 00:18:00.041 "zone_append": false, 00:18:00.041 "compare": false, 00:18:00.041 "compare_and_write": false, 00:18:00.041 "abort": true, 00:18:00.041 "seek_hole": false, 00:18:00.041 "seek_data": false, 00:18:00.041 "copy": true, 00:18:00.041 "nvme_iov_md": false 00:18:00.041 }, 00:18:00.041 "memory_domains": [ 00:18:00.041 { 00:18:00.041 "dma_device_id": "system", 00:18:00.041 "dma_device_type": 1 00:18:00.041 }, 00:18:00.041 { 00:18:00.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.041 "dma_device_type": 2 00:18:00.041 } 00:18:00.041 ], 00:18:00.041 "driver_specific": {} 00:18:00.041 } 00:18:00.041 ] 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:00.041 21:45:00 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.041 [2024-12-10 21:45:00.738103] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:00.041 [2024-12-10 21:45:00.738146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:00.041 [2024-12-10 21:45:00.738183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:00.041 [2024-12-10 21:45:00.739915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:00.041 [2024-12-10 21:45:00.739971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.041 "name": "Existed_Raid", 00:18:00.041 "uuid": "37f3f073-28da-463a-9a65-f9ecffee03b4", 00:18:00.041 "strip_size_kb": 64, 00:18:00.041 "state": "configuring", 00:18:00.041 "raid_level": "raid5f", 00:18:00.041 "superblock": true, 00:18:00.041 "num_base_bdevs": 4, 00:18:00.041 "num_base_bdevs_discovered": 3, 00:18:00.041 "num_base_bdevs_operational": 4, 00:18:00.041 "base_bdevs_list": [ 00:18:00.041 { 00:18:00.041 "name": "BaseBdev1", 00:18:00.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.041 "is_configured": false, 00:18:00.041 "data_offset": 0, 00:18:00.041 "data_size": 0 00:18:00.041 }, 00:18:00.041 { 00:18:00.041 "name": "BaseBdev2", 00:18:00.041 "uuid": "ff398c5a-271b-4291-ac5d-344e2a56ade5", 00:18:00.041 "is_configured": true, 00:18:00.041 "data_offset": 2048, 00:18:00.041 
"data_size": 63488 00:18:00.041 }, 00:18:00.041 { 00:18:00.041 "name": "BaseBdev3", 00:18:00.041 "uuid": "3c5341ec-1d86-4e0a-9bba-b0ce3180f5e2", 00:18:00.041 "is_configured": true, 00:18:00.041 "data_offset": 2048, 00:18:00.041 "data_size": 63488 00:18:00.041 }, 00:18:00.041 { 00:18:00.041 "name": "BaseBdev4", 00:18:00.041 "uuid": "f08785cc-7e70-437b-abd9-7a2a3f952aeb", 00:18:00.041 "is_configured": true, 00:18:00.041 "data_offset": 2048, 00:18:00.041 "data_size": 63488 00:18:00.041 } 00:18:00.041 ] 00:18:00.041 }' 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.041 21:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.610 21:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:00.610 21:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.610 21:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.610 [2024-12-10 21:45:01.217342] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:00.610 21:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.611 21:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:00.611 21:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:00.611 21:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:00.611 21:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:00.611 21:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:00.611 21:45:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:00.611 21:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.611 21:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.611 21:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.611 21:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.611 21:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.611 21:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.611 21:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.611 21:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.611 21:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.611 21:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.611 "name": "Existed_Raid", 00:18:00.611 "uuid": "37f3f073-28da-463a-9a65-f9ecffee03b4", 00:18:00.611 "strip_size_kb": 64, 00:18:00.611 "state": "configuring", 00:18:00.611 "raid_level": "raid5f", 00:18:00.611 "superblock": true, 00:18:00.611 "num_base_bdevs": 4, 00:18:00.611 "num_base_bdevs_discovered": 2, 00:18:00.611 "num_base_bdevs_operational": 4, 00:18:00.611 "base_bdevs_list": [ 00:18:00.611 { 00:18:00.611 "name": "BaseBdev1", 00:18:00.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.611 "is_configured": false, 00:18:00.611 "data_offset": 0, 00:18:00.611 "data_size": 0 00:18:00.611 }, 00:18:00.611 { 00:18:00.611 "name": null, 00:18:00.611 "uuid": "ff398c5a-271b-4291-ac5d-344e2a56ade5", 00:18:00.611 
"is_configured": false, 00:18:00.611 "data_offset": 0, 00:18:00.611 "data_size": 63488 00:18:00.611 }, 00:18:00.611 { 00:18:00.611 "name": "BaseBdev3", 00:18:00.611 "uuid": "3c5341ec-1d86-4e0a-9bba-b0ce3180f5e2", 00:18:00.611 "is_configured": true, 00:18:00.611 "data_offset": 2048, 00:18:00.611 "data_size": 63488 00:18:00.611 }, 00:18:00.611 { 00:18:00.611 "name": "BaseBdev4", 00:18:00.611 "uuid": "f08785cc-7e70-437b-abd9-7a2a3f952aeb", 00:18:00.611 "is_configured": true, 00:18:00.611 "data_offset": 2048, 00:18:00.611 "data_size": 63488 00:18:00.611 } 00:18:00.611 ] 00:18:00.611 }' 00:18:00.611 21:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.611 21:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.180 21:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:01.180 21:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.180 21:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.180 21:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.180 21:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.180 21:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:01.180 21:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:01.180 21:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.180 21:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.180 [2024-12-10 21:45:01.745300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:18:01.180 BaseBdev1 00:18:01.180 21:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.180 21:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:01.180 21:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:01.180 21:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:01.180 21:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:01.180 21:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:01.180 21:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:01.180 21:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:01.180 21:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.180 21:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.180 21:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.180 21:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:01.180 21:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.180 21:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.180 [ 00:18:01.180 { 00:18:01.180 "name": "BaseBdev1", 00:18:01.180 "aliases": [ 00:18:01.180 "7ac8d6c6-887b-4977-b040-a802651f5971" 00:18:01.180 ], 00:18:01.180 "product_name": "Malloc disk", 00:18:01.180 "block_size": 512, 00:18:01.180 "num_blocks": 65536, 00:18:01.180 "uuid": "7ac8d6c6-887b-4977-b040-a802651f5971", 
00:18:01.180 "assigned_rate_limits": { 00:18:01.180 "rw_ios_per_sec": 0, 00:18:01.180 "rw_mbytes_per_sec": 0, 00:18:01.180 "r_mbytes_per_sec": 0, 00:18:01.180 "w_mbytes_per_sec": 0 00:18:01.180 }, 00:18:01.180 "claimed": true, 00:18:01.180 "claim_type": "exclusive_write", 00:18:01.180 "zoned": false, 00:18:01.180 "supported_io_types": { 00:18:01.180 "read": true, 00:18:01.180 "write": true, 00:18:01.180 "unmap": true, 00:18:01.180 "flush": true, 00:18:01.180 "reset": true, 00:18:01.180 "nvme_admin": false, 00:18:01.180 "nvme_io": false, 00:18:01.180 "nvme_io_md": false, 00:18:01.180 "write_zeroes": true, 00:18:01.180 "zcopy": true, 00:18:01.180 "get_zone_info": false, 00:18:01.180 "zone_management": false, 00:18:01.180 "zone_append": false, 00:18:01.180 "compare": false, 00:18:01.180 "compare_and_write": false, 00:18:01.180 "abort": true, 00:18:01.180 "seek_hole": false, 00:18:01.180 "seek_data": false, 00:18:01.180 "copy": true, 00:18:01.180 "nvme_iov_md": false 00:18:01.180 }, 00:18:01.180 "memory_domains": [ 00:18:01.180 { 00:18:01.180 "dma_device_id": "system", 00:18:01.181 "dma_device_type": 1 00:18:01.181 }, 00:18:01.181 { 00:18:01.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.181 "dma_device_type": 2 00:18:01.181 } 00:18:01.181 ], 00:18:01.181 "driver_specific": {} 00:18:01.181 } 00:18:01.181 ] 00:18:01.181 21:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.181 21:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:01.181 21:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:01.181 21:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:01.181 21:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:01.181 21:45:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:01.181 21:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:01.181 21:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:01.181 21:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.181 21:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.181 21:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.181 21:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.181 21:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.181 21:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.181 21:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.181 21:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.181 21:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.181 21:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.181 "name": "Existed_Raid", 00:18:01.181 "uuid": "37f3f073-28da-463a-9a65-f9ecffee03b4", 00:18:01.181 "strip_size_kb": 64, 00:18:01.181 "state": "configuring", 00:18:01.181 "raid_level": "raid5f", 00:18:01.181 "superblock": true, 00:18:01.181 "num_base_bdevs": 4, 00:18:01.181 "num_base_bdevs_discovered": 3, 00:18:01.181 "num_base_bdevs_operational": 4, 00:18:01.181 "base_bdevs_list": [ 00:18:01.181 { 00:18:01.181 "name": "BaseBdev1", 00:18:01.181 "uuid": "7ac8d6c6-887b-4977-b040-a802651f5971", 
00:18:01.181 "is_configured": true, 00:18:01.181 "data_offset": 2048, 00:18:01.181 "data_size": 63488 00:18:01.181 }, 00:18:01.181 { 00:18:01.181 "name": null, 00:18:01.181 "uuid": "ff398c5a-271b-4291-ac5d-344e2a56ade5", 00:18:01.181 "is_configured": false, 00:18:01.181 "data_offset": 0, 00:18:01.181 "data_size": 63488 00:18:01.181 }, 00:18:01.181 { 00:18:01.181 "name": "BaseBdev3", 00:18:01.181 "uuid": "3c5341ec-1d86-4e0a-9bba-b0ce3180f5e2", 00:18:01.181 "is_configured": true, 00:18:01.181 "data_offset": 2048, 00:18:01.181 "data_size": 63488 00:18:01.181 }, 00:18:01.181 { 00:18:01.181 "name": "BaseBdev4", 00:18:01.181 "uuid": "f08785cc-7e70-437b-abd9-7a2a3f952aeb", 00:18:01.181 "is_configured": true, 00:18:01.181 "data_offset": 2048, 00:18:01.181 "data_size": 63488 00:18:01.181 } 00:18:01.181 ] 00:18:01.181 }' 00:18:01.181 21:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.181 21:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.440 21:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:01.440 21:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.440 21:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.440 21:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.700 21:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.700 21:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:01.700 21:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:01.700 21:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:01.700 21:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.700 [2024-12-10 21:45:02.252478] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:01.700 21:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.700 21:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:01.700 21:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:01.700 21:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:01.700 21:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:01.700 21:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:01.700 21:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:01.700 21:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.700 21:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.700 21:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.700 21:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.700 21:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.700 21:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.700 21:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.700 21:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:18:01.700 21:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.700 21:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.700 "name": "Existed_Raid", 00:18:01.700 "uuid": "37f3f073-28da-463a-9a65-f9ecffee03b4", 00:18:01.700 "strip_size_kb": 64, 00:18:01.700 "state": "configuring", 00:18:01.700 "raid_level": "raid5f", 00:18:01.700 "superblock": true, 00:18:01.700 "num_base_bdevs": 4, 00:18:01.700 "num_base_bdevs_discovered": 2, 00:18:01.700 "num_base_bdevs_operational": 4, 00:18:01.700 "base_bdevs_list": [ 00:18:01.700 { 00:18:01.700 "name": "BaseBdev1", 00:18:01.700 "uuid": "7ac8d6c6-887b-4977-b040-a802651f5971", 00:18:01.700 "is_configured": true, 00:18:01.700 "data_offset": 2048, 00:18:01.700 "data_size": 63488 00:18:01.700 }, 00:18:01.700 { 00:18:01.700 "name": null, 00:18:01.700 "uuid": "ff398c5a-271b-4291-ac5d-344e2a56ade5", 00:18:01.700 "is_configured": false, 00:18:01.700 "data_offset": 0, 00:18:01.700 "data_size": 63488 00:18:01.700 }, 00:18:01.700 { 00:18:01.700 "name": null, 00:18:01.700 "uuid": "3c5341ec-1d86-4e0a-9bba-b0ce3180f5e2", 00:18:01.700 "is_configured": false, 00:18:01.700 "data_offset": 0, 00:18:01.700 "data_size": 63488 00:18:01.700 }, 00:18:01.700 { 00:18:01.700 "name": "BaseBdev4", 00:18:01.700 "uuid": "f08785cc-7e70-437b-abd9-7a2a3f952aeb", 00:18:01.701 "is_configured": true, 00:18:01.701 "data_offset": 2048, 00:18:01.701 "data_size": 63488 00:18:01.701 } 00:18:01.701 ] 00:18:01.701 }' 00:18:01.701 21:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.701 21:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.960 21:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:01.960 21:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 
-- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.960 21:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.960 21:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.960 21:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.960 21:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:01.960 21:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:01.960 21:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.960 21:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.960 [2024-12-10 21:45:02.679936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:01.960 21:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.960 21:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:01.960 21:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:01.960 21:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:01.960 21:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:01.960 21:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:01.960 21:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:01.960 21:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.960 21:45:02 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.960 21:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.960 21:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.960 21:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.960 21:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.960 21:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.960 21:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.960 21:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.219 21:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.219 "name": "Existed_Raid", 00:18:02.219 "uuid": "37f3f073-28da-463a-9a65-f9ecffee03b4", 00:18:02.219 "strip_size_kb": 64, 00:18:02.219 "state": "configuring", 00:18:02.219 "raid_level": "raid5f", 00:18:02.219 "superblock": true, 00:18:02.219 "num_base_bdevs": 4, 00:18:02.219 "num_base_bdevs_discovered": 3, 00:18:02.219 "num_base_bdevs_operational": 4, 00:18:02.219 "base_bdevs_list": [ 00:18:02.219 { 00:18:02.219 "name": "BaseBdev1", 00:18:02.219 "uuid": "7ac8d6c6-887b-4977-b040-a802651f5971", 00:18:02.219 "is_configured": true, 00:18:02.219 "data_offset": 2048, 00:18:02.219 "data_size": 63488 00:18:02.219 }, 00:18:02.219 { 00:18:02.219 "name": null, 00:18:02.219 "uuid": "ff398c5a-271b-4291-ac5d-344e2a56ade5", 00:18:02.219 "is_configured": false, 00:18:02.219 "data_offset": 0, 00:18:02.219 "data_size": 63488 00:18:02.219 }, 00:18:02.219 { 00:18:02.219 "name": "BaseBdev3", 00:18:02.219 "uuid": "3c5341ec-1d86-4e0a-9bba-b0ce3180f5e2", 00:18:02.219 
"is_configured": true, 00:18:02.219 "data_offset": 2048, 00:18:02.219 "data_size": 63488 00:18:02.219 }, 00:18:02.219 { 00:18:02.219 "name": "BaseBdev4", 00:18:02.219 "uuid": "f08785cc-7e70-437b-abd9-7a2a3f952aeb", 00:18:02.219 "is_configured": true, 00:18:02.219 "data_offset": 2048, 00:18:02.219 "data_size": 63488 00:18:02.219 } 00:18:02.219 ] 00:18:02.219 }' 00:18:02.219 21:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.219 21:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.478 21:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:02.478 21:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.478 21:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.478 21:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.478 21:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.478 21:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:02.478 21:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:02.478 21:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.478 21:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.478 [2024-12-10 21:45:03.191087] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:02.737 21:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.737 21:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring 
raid5f 64 4 00:18:02.737 21:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:02.737 21:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:02.737 21:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:02.737 21:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:02.737 21:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:02.737 21:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.737 21:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.737 21:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.737 21:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.737 21:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.737 21:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:02.737 21:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.737 21:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.737 21:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.737 21:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.737 "name": "Existed_Raid", 00:18:02.737 "uuid": "37f3f073-28da-463a-9a65-f9ecffee03b4", 00:18:02.737 "strip_size_kb": 64, 00:18:02.737 "state": "configuring", 00:18:02.737 "raid_level": "raid5f", 00:18:02.737 
"superblock": true, 00:18:02.737 "num_base_bdevs": 4, 00:18:02.737 "num_base_bdevs_discovered": 2, 00:18:02.737 "num_base_bdevs_operational": 4, 00:18:02.737 "base_bdevs_list": [ 00:18:02.737 { 00:18:02.737 "name": null, 00:18:02.737 "uuid": "7ac8d6c6-887b-4977-b040-a802651f5971", 00:18:02.737 "is_configured": false, 00:18:02.737 "data_offset": 0, 00:18:02.737 "data_size": 63488 00:18:02.737 }, 00:18:02.737 { 00:18:02.737 "name": null, 00:18:02.737 "uuid": "ff398c5a-271b-4291-ac5d-344e2a56ade5", 00:18:02.737 "is_configured": false, 00:18:02.737 "data_offset": 0, 00:18:02.737 "data_size": 63488 00:18:02.737 }, 00:18:02.737 { 00:18:02.737 "name": "BaseBdev3", 00:18:02.737 "uuid": "3c5341ec-1d86-4e0a-9bba-b0ce3180f5e2", 00:18:02.737 "is_configured": true, 00:18:02.737 "data_offset": 2048, 00:18:02.737 "data_size": 63488 00:18:02.737 }, 00:18:02.737 { 00:18:02.737 "name": "BaseBdev4", 00:18:02.737 "uuid": "f08785cc-7e70-437b-abd9-7a2a3f952aeb", 00:18:02.737 "is_configured": true, 00:18:02.737 "data_offset": 2048, 00:18:02.737 "data_size": 63488 00:18:02.737 } 00:18:02.737 ] 00:18:02.737 }' 00:18:02.737 21:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.737 21:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.997 21:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:02.997 21:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.997 21:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.997 21:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.997 21:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.997 21:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- 
# [[ false == \f\a\l\s\e ]] 00:18:02.997 21:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:02.997 21:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.997 21:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.997 [2024-12-10 21:45:03.729202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:02.997 21:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.997 21:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:02.997 21:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:02.997 21:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:02.997 21:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:02.997 21:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:02.997 21:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:02.997 21:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.997 21:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.997 21:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.997 21:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.997 21:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:02.997 
21:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.997 21:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.997 21:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.997 21:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.997 21:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.997 "name": "Existed_Raid", 00:18:02.997 "uuid": "37f3f073-28da-463a-9a65-f9ecffee03b4", 00:18:02.997 "strip_size_kb": 64, 00:18:02.997 "state": "configuring", 00:18:02.997 "raid_level": "raid5f", 00:18:02.997 "superblock": true, 00:18:02.997 "num_base_bdevs": 4, 00:18:02.997 "num_base_bdevs_discovered": 3, 00:18:02.997 "num_base_bdevs_operational": 4, 00:18:02.997 "base_bdevs_list": [ 00:18:02.997 { 00:18:02.997 "name": null, 00:18:02.997 "uuid": "7ac8d6c6-887b-4977-b040-a802651f5971", 00:18:02.997 "is_configured": false, 00:18:02.997 "data_offset": 0, 00:18:02.997 "data_size": 63488 00:18:02.997 }, 00:18:02.997 { 00:18:02.997 "name": "BaseBdev2", 00:18:02.997 "uuid": "ff398c5a-271b-4291-ac5d-344e2a56ade5", 00:18:02.997 "is_configured": true, 00:18:02.997 "data_offset": 2048, 00:18:02.997 "data_size": 63488 00:18:02.997 }, 00:18:02.997 { 00:18:02.997 "name": "BaseBdev3", 00:18:02.997 "uuid": "3c5341ec-1d86-4e0a-9bba-b0ce3180f5e2", 00:18:02.997 "is_configured": true, 00:18:02.997 "data_offset": 2048, 00:18:02.997 "data_size": 63488 00:18:02.997 }, 00:18:02.997 { 00:18:02.997 "name": "BaseBdev4", 00:18:02.997 "uuid": "f08785cc-7e70-437b-abd9-7a2a3f952aeb", 00:18:02.998 "is_configured": true, 00:18:02.998 "data_offset": 2048, 00:18:02.998 "data_size": 63488 00:18:02.998 } 00:18:02.998 ] 00:18:02.998 }' 00:18:02.998 21:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:18:02.998 21:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.587 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:03.587 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.587 21:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.587 21:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.587 21:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.587 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:03.587 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:03.587 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.587 21:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.587 21:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.587 21:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.587 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7ac8d6c6-887b-4977-b040-a802651f5971 00:18:03.587 21:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.587 21:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.587 [2024-12-10 21:45:04.228604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:03.587 [2024-12-10 21:45:04.228834] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:03.587 [2024-12-10 21:45:04.228846] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:03.587 [2024-12-10 21:45:04.229088] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:03.587 NewBaseBdev 00:18:03.587 21:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.587 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:03.587 21:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:18:03.588 21:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:03.588 21:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:03.588 21:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:03.588 21:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:03.588 21:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:03.588 21:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.588 21:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.588 [2024-12-10 21:45:04.236299] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:03.588 [2024-12-10 21:45:04.236371] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:03.588 [2024-12-10 21:45:04.236674] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:03.588 21:45:04 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.588 21:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:03.588 21:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.588 21:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.588 [ 00:18:03.588 { 00:18:03.588 "name": "NewBaseBdev", 00:18:03.588 "aliases": [ 00:18:03.588 "7ac8d6c6-887b-4977-b040-a802651f5971" 00:18:03.588 ], 00:18:03.588 "product_name": "Malloc disk", 00:18:03.588 "block_size": 512, 00:18:03.588 "num_blocks": 65536, 00:18:03.588 "uuid": "7ac8d6c6-887b-4977-b040-a802651f5971", 00:18:03.588 "assigned_rate_limits": { 00:18:03.588 "rw_ios_per_sec": 0, 00:18:03.588 "rw_mbytes_per_sec": 0, 00:18:03.588 "r_mbytes_per_sec": 0, 00:18:03.588 "w_mbytes_per_sec": 0 00:18:03.588 }, 00:18:03.588 "claimed": true, 00:18:03.588 "claim_type": "exclusive_write", 00:18:03.588 "zoned": false, 00:18:03.588 "supported_io_types": { 00:18:03.588 "read": true, 00:18:03.588 "write": true, 00:18:03.588 "unmap": true, 00:18:03.588 "flush": true, 00:18:03.588 "reset": true, 00:18:03.588 "nvme_admin": false, 00:18:03.588 "nvme_io": false, 00:18:03.588 "nvme_io_md": false, 00:18:03.588 "write_zeroes": true, 00:18:03.588 "zcopy": true, 00:18:03.588 "get_zone_info": false, 00:18:03.588 "zone_management": false, 00:18:03.588 "zone_append": false, 00:18:03.588 "compare": false, 00:18:03.588 "compare_and_write": false, 00:18:03.588 "abort": true, 00:18:03.588 "seek_hole": false, 00:18:03.588 "seek_data": false, 00:18:03.588 "copy": true, 00:18:03.588 "nvme_iov_md": false 00:18:03.588 }, 00:18:03.588 "memory_domains": [ 00:18:03.588 { 00:18:03.588 "dma_device_id": "system", 00:18:03.588 "dma_device_type": 1 00:18:03.588 }, 00:18:03.588 { 00:18:03.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.588 "dma_device_type": 2 00:18:03.588 } 
00:18:03.588 ], 00:18:03.588 "driver_specific": {} 00:18:03.588 } 00:18:03.588 ] 00:18:03.588 21:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.588 21:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:03.588 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:03.588 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:03.588 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.588 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:03.588 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:03.588 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:03.588 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.588 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.588 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.588 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.588 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.588 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:03.588 21:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.588 21:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.588 
21:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.588 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.588 "name": "Existed_Raid", 00:18:03.588 "uuid": "37f3f073-28da-463a-9a65-f9ecffee03b4", 00:18:03.588 "strip_size_kb": 64, 00:18:03.588 "state": "online", 00:18:03.588 "raid_level": "raid5f", 00:18:03.588 "superblock": true, 00:18:03.588 "num_base_bdevs": 4, 00:18:03.588 "num_base_bdevs_discovered": 4, 00:18:03.588 "num_base_bdevs_operational": 4, 00:18:03.588 "base_bdevs_list": [ 00:18:03.588 { 00:18:03.588 "name": "NewBaseBdev", 00:18:03.588 "uuid": "7ac8d6c6-887b-4977-b040-a802651f5971", 00:18:03.588 "is_configured": true, 00:18:03.588 "data_offset": 2048, 00:18:03.588 "data_size": 63488 00:18:03.588 }, 00:18:03.588 { 00:18:03.588 "name": "BaseBdev2", 00:18:03.588 "uuid": "ff398c5a-271b-4291-ac5d-344e2a56ade5", 00:18:03.588 "is_configured": true, 00:18:03.588 "data_offset": 2048, 00:18:03.588 "data_size": 63488 00:18:03.588 }, 00:18:03.588 { 00:18:03.588 "name": "BaseBdev3", 00:18:03.588 "uuid": "3c5341ec-1d86-4e0a-9bba-b0ce3180f5e2", 00:18:03.588 "is_configured": true, 00:18:03.588 "data_offset": 2048, 00:18:03.588 "data_size": 63488 00:18:03.588 }, 00:18:03.588 { 00:18:03.588 "name": "BaseBdev4", 00:18:03.588 "uuid": "f08785cc-7e70-437b-abd9-7a2a3f952aeb", 00:18:03.588 "is_configured": true, 00:18:03.588 "data_offset": 2048, 00:18:03.588 "data_size": 63488 00:18:03.588 } 00:18:03.588 ] 00:18:03.588 }' 00:18:03.588 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.588 21:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.204 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:04.204 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:18:04.204 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:04.204 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:04.204 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:04.204 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:04.204 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:04.204 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:04.204 21:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.204 21:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.204 [2024-12-10 21:45:04.756447] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:04.204 21:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.204 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:04.204 "name": "Existed_Raid", 00:18:04.204 "aliases": [ 00:18:04.204 "37f3f073-28da-463a-9a65-f9ecffee03b4" 00:18:04.204 ], 00:18:04.204 "product_name": "Raid Volume", 00:18:04.204 "block_size": 512, 00:18:04.204 "num_blocks": 190464, 00:18:04.204 "uuid": "37f3f073-28da-463a-9a65-f9ecffee03b4", 00:18:04.204 "assigned_rate_limits": { 00:18:04.204 "rw_ios_per_sec": 0, 00:18:04.204 "rw_mbytes_per_sec": 0, 00:18:04.204 "r_mbytes_per_sec": 0, 00:18:04.204 "w_mbytes_per_sec": 0 00:18:04.204 }, 00:18:04.205 "claimed": false, 00:18:04.205 "zoned": false, 00:18:04.205 "supported_io_types": { 00:18:04.205 "read": true, 00:18:04.205 "write": true, 00:18:04.205 "unmap": false, 00:18:04.205 "flush": false, 
00:18:04.205 "reset": true, 00:18:04.205 "nvme_admin": false, 00:18:04.205 "nvme_io": false, 00:18:04.205 "nvme_io_md": false, 00:18:04.205 "write_zeroes": true, 00:18:04.205 "zcopy": false, 00:18:04.205 "get_zone_info": false, 00:18:04.205 "zone_management": false, 00:18:04.205 "zone_append": false, 00:18:04.205 "compare": false, 00:18:04.205 "compare_and_write": false, 00:18:04.205 "abort": false, 00:18:04.205 "seek_hole": false, 00:18:04.205 "seek_data": false, 00:18:04.205 "copy": false, 00:18:04.205 "nvme_iov_md": false 00:18:04.205 }, 00:18:04.205 "driver_specific": { 00:18:04.205 "raid": { 00:18:04.205 "uuid": "37f3f073-28da-463a-9a65-f9ecffee03b4", 00:18:04.205 "strip_size_kb": 64, 00:18:04.205 "state": "online", 00:18:04.205 "raid_level": "raid5f", 00:18:04.205 "superblock": true, 00:18:04.205 "num_base_bdevs": 4, 00:18:04.205 "num_base_bdevs_discovered": 4, 00:18:04.205 "num_base_bdevs_operational": 4, 00:18:04.205 "base_bdevs_list": [ 00:18:04.205 { 00:18:04.205 "name": "NewBaseBdev", 00:18:04.205 "uuid": "7ac8d6c6-887b-4977-b040-a802651f5971", 00:18:04.205 "is_configured": true, 00:18:04.205 "data_offset": 2048, 00:18:04.205 "data_size": 63488 00:18:04.205 }, 00:18:04.205 { 00:18:04.205 "name": "BaseBdev2", 00:18:04.205 "uuid": "ff398c5a-271b-4291-ac5d-344e2a56ade5", 00:18:04.205 "is_configured": true, 00:18:04.205 "data_offset": 2048, 00:18:04.205 "data_size": 63488 00:18:04.205 }, 00:18:04.205 { 00:18:04.205 "name": "BaseBdev3", 00:18:04.205 "uuid": "3c5341ec-1d86-4e0a-9bba-b0ce3180f5e2", 00:18:04.205 "is_configured": true, 00:18:04.205 "data_offset": 2048, 00:18:04.205 "data_size": 63488 00:18:04.205 }, 00:18:04.205 { 00:18:04.205 "name": "BaseBdev4", 00:18:04.205 "uuid": "f08785cc-7e70-437b-abd9-7a2a3f952aeb", 00:18:04.205 "is_configured": true, 00:18:04.205 "data_offset": 2048, 00:18:04.205 "data_size": 63488 00:18:04.205 } 00:18:04.205 ] 00:18:04.205 } 00:18:04.205 } 00:18:04.205 }' 00:18:04.205 21:45:04 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:04.205 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:04.205 BaseBdev2 00:18:04.205 BaseBdev3 00:18:04.205 BaseBdev4' 00:18:04.205 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:04.205 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:04.205 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:04.205 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:04.205 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:04.205 21:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.205 21:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.205 21:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.205 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:04.205 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:04.205 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:04.205 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:04.205 21:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.205 21:45:04 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:04.205 21:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.205 21:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.205 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:04.205 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:04.205 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:04.205 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:04.205 21:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:04.205 21:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.205 21:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.471 21:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.471 21:45:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:04.471 21:45:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:04.471 21:45:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:04.471 21:45:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:04.471 21:45:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.471 21:45:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:04.471 21:45:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.471 21:45:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.471 21:45:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:04.471 21:45:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:04.471 21:45:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:04.471 21:45:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.471 21:45:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.471 [2024-12-10 21:45:05.071798] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:04.471 [2024-12-10 21:45:05.071880] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:04.471 [2024-12-10 21:45:05.071991] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:04.471 [2024-12-10 21:45:05.072372] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:04.471 [2024-12-10 21:45:05.072448] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:04.471 21:45:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.471 21:45:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83633 00:18:04.471 21:45:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83633 ']' 00:18:04.471 21:45:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 
83633 00:18:04.471 21:45:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:18:04.471 21:45:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:04.471 21:45:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83633 00:18:04.471 21:45:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:04.471 21:45:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:04.471 killing process with pid 83633 00:18:04.471 21:45:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83633' 00:18:04.471 21:45:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83633 00:18:04.471 [2024-12-10 21:45:05.111562] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:04.471 21:45:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83633 00:18:04.731 [2024-12-10 21:45:05.507048] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:06.125 ************************************ 00:18:06.125 END TEST raid5f_state_function_test_sb 00:18:06.125 ************************************ 00:18:06.125 21:45:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:18:06.125 00:18:06.125 real 0m11.255s 00:18:06.125 user 0m17.906s 00:18:06.125 sys 0m2.013s 00:18:06.126 21:45:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:06.126 21:45:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.126 21:45:06 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:18:06.126 21:45:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 
-le 1 ']' 00:18:06.126 21:45:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:06.126 21:45:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:06.126 ************************************ 00:18:06.126 START TEST raid5f_superblock_test 00:18:06.126 ************************************ 00:18:06.126 21:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:18:06.126 21:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:18:06.126 21:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:18:06.126 21:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:06.126 21:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:06.126 21:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:06.126 21:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:06.126 21:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:06.126 21:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:06.126 21:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:06.126 21:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:06.126 21:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:06.126 21:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:06.126 21:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:06.126 21:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:18:06.126 21:45:06 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:18:06.126 21:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:18:06.126 21:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84298 00:18:06.126 21:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:06.126 21:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84298 00:18:06.126 21:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84298 ']' 00:18:06.126 21:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.126 21:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:06.126 21:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.126 21:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:06.126 21:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.126 [2024-12-10 21:45:06.778403] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:18:06.126 [2024-12-10 21:45:06.778633] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84298 ] 00:18:06.385 [2024-12-10 21:45:06.954221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.385 [2024-12-10 21:45:07.073812] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.644 [2024-12-10 21:45:07.272101] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:06.644 [2024-12-10 21:45:07.272159] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:06.904 21:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:06.904 21:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:18:06.904 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:06.904 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:06.904 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:06.904 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:06.904 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:06.904 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:06.904 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:06.904 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:06.904 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:18:06.904 21:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.904 21:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.904 malloc1 00:18:06.904 21:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.904 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:06.904 21:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.904 21:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.904 [2024-12-10 21:45:07.678873] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:06.904 [2024-12-10 21:45:07.678933] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.904 [2024-12-10 21:45:07.678956] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:06.904 [2024-12-10 21:45:07.678965] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.904 [2024-12-10 21:45:07.681074] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.904 [2024-12-10 21:45:07.681116] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:07.165 pt1 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.165 malloc2 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.165 [2024-12-10 21:45:07.733290] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:07.165 [2024-12-10 21:45:07.733346] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.165 [2024-12-10 21:45:07.733369] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:07.165 [2024-12-10 21:45:07.733377] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.165 [2024-12-10 21:45:07.735452] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.165 [2024-12-10 21:45:07.735498] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:07.165 pt2 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.165 malloc3 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.165 [2024-12-10 21:45:07.802082] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:07.165 [2024-12-10 21:45:07.802182] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.165 [2024-12-10 21:45:07.802221] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:07.165 [2024-12-10 21:45:07.802253] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.165 [2024-12-10 21:45:07.804350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.165 [2024-12-10 21:45:07.804433] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:07.165 pt3 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.165 21:45:07 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.165 malloc4 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.165 [2024-12-10 21:45:07.860019] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:07.165 [2024-12-10 21:45:07.860160] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.165 [2024-12-10 21:45:07.860199] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:07.165 [2024-12-10 21:45:07.860231] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.165 [2024-12-10 21:45:07.862277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.165 [2024-12-10 21:45:07.862343] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:07.165 pt4 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:18:07.165 21:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.166 21:45:07 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:07.166 [2024-12-10 21:45:07.872043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:07.166 [2024-12-10 21:45:07.873891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:07.166 [2024-12-10 21:45:07.874016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:07.166 [2024-12-10 21:45:07.874088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:07.166 [2024-12-10 21:45:07.874327] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:07.166 [2024-12-10 21:45:07.874378] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:07.166 [2024-12-10 21:45:07.874650] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:07.166 [2024-12-10 21:45:07.881899] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:07.166 [2024-12-10 21:45:07.881958] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:07.166 [2024-12-10 21:45:07.882176] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.166 21:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.166 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:07.166 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:07.166 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:07.166 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:07.166 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:07.166 
21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:07.166 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.166 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.166 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.166 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.166 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.166 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.166 21:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.166 21:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.166 21:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.166 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.166 "name": "raid_bdev1", 00:18:07.166 "uuid": "2cb79322-2a63-4d77-bf45-733eb87e16ab", 00:18:07.166 "strip_size_kb": 64, 00:18:07.166 "state": "online", 00:18:07.166 "raid_level": "raid5f", 00:18:07.166 "superblock": true, 00:18:07.166 "num_base_bdevs": 4, 00:18:07.166 "num_base_bdevs_discovered": 4, 00:18:07.166 "num_base_bdevs_operational": 4, 00:18:07.166 "base_bdevs_list": [ 00:18:07.166 { 00:18:07.166 "name": "pt1", 00:18:07.166 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:07.166 "is_configured": true, 00:18:07.166 "data_offset": 2048, 00:18:07.166 "data_size": 63488 00:18:07.166 }, 00:18:07.166 { 00:18:07.166 "name": "pt2", 00:18:07.166 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:07.166 "is_configured": true, 00:18:07.166 "data_offset": 2048, 00:18:07.166 
"data_size": 63488 00:18:07.166 }, 00:18:07.166 { 00:18:07.166 "name": "pt3", 00:18:07.166 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:07.166 "is_configured": true, 00:18:07.166 "data_offset": 2048, 00:18:07.166 "data_size": 63488 00:18:07.166 }, 00:18:07.166 { 00:18:07.166 "name": "pt4", 00:18:07.166 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:07.166 "is_configured": true, 00:18:07.166 "data_offset": 2048, 00:18:07.166 "data_size": 63488 00:18:07.166 } 00:18:07.166 ] 00:18:07.166 }' 00:18:07.166 21:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.166 21:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.734 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:07.734 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:07.734 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:07.734 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:07.734 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:07.734 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:07.734 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:07.734 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:07.734 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.734 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.734 [2024-12-10 21:45:08.318375] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:07.734 21:45:08 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.734 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:07.734 "name": "raid_bdev1", 00:18:07.734 "aliases": [ 00:18:07.734 "2cb79322-2a63-4d77-bf45-733eb87e16ab" 00:18:07.734 ], 00:18:07.734 "product_name": "Raid Volume", 00:18:07.734 "block_size": 512, 00:18:07.734 "num_blocks": 190464, 00:18:07.734 "uuid": "2cb79322-2a63-4d77-bf45-733eb87e16ab", 00:18:07.734 "assigned_rate_limits": { 00:18:07.734 "rw_ios_per_sec": 0, 00:18:07.734 "rw_mbytes_per_sec": 0, 00:18:07.734 "r_mbytes_per_sec": 0, 00:18:07.734 "w_mbytes_per_sec": 0 00:18:07.734 }, 00:18:07.734 "claimed": false, 00:18:07.734 "zoned": false, 00:18:07.734 "supported_io_types": { 00:18:07.734 "read": true, 00:18:07.734 "write": true, 00:18:07.734 "unmap": false, 00:18:07.734 "flush": false, 00:18:07.734 "reset": true, 00:18:07.734 "nvme_admin": false, 00:18:07.734 "nvme_io": false, 00:18:07.734 "nvme_io_md": false, 00:18:07.734 "write_zeroes": true, 00:18:07.734 "zcopy": false, 00:18:07.734 "get_zone_info": false, 00:18:07.734 "zone_management": false, 00:18:07.734 "zone_append": false, 00:18:07.734 "compare": false, 00:18:07.734 "compare_and_write": false, 00:18:07.734 "abort": false, 00:18:07.734 "seek_hole": false, 00:18:07.734 "seek_data": false, 00:18:07.734 "copy": false, 00:18:07.734 "nvme_iov_md": false 00:18:07.734 }, 00:18:07.734 "driver_specific": { 00:18:07.734 "raid": { 00:18:07.734 "uuid": "2cb79322-2a63-4d77-bf45-733eb87e16ab", 00:18:07.734 "strip_size_kb": 64, 00:18:07.734 "state": "online", 00:18:07.734 "raid_level": "raid5f", 00:18:07.734 "superblock": true, 00:18:07.734 "num_base_bdevs": 4, 00:18:07.734 "num_base_bdevs_discovered": 4, 00:18:07.734 "num_base_bdevs_operational": 4, 00:18:07.734 "base_bdevs_list": [ 00:18:07.734 { 00:18:07.734 "name": "pt1", 00:18:07.734 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:07.734 "is_configured": true, 00:18:07.734 "data_offset": 2048, 
00:18:07.734 "data_size": 63488 00:18:07.734 }, 00:18:07.734 { 00:18:07.734 "name": "pt2", 00:18:07.734 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:07.734 "is_configured": true, 00:18:07.734 "data_offset": 2048, 00:18:07.734 "data_size": 63488 00:18:07.734 }, 00:18:07.734 { 00:18:07.734 "name": "pt3", 00:18:07.734 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:07.734 "is_configured": true, 00:18:07.734 "data_offset": 2048, 00:18:07.734 "data_size": 63488 00:18:07.734 }, 00:18:07.734 { 00:18:07.734 "name": "pt4", 00:18:07.734 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:07.734 "is_configured": true, 00:18:07.734 "data_offset": 2048, 00:18:07.734 "data_size": 63488 00:18:07.734 } 00:18:07.734 ] 00:18:07.735 } 00:18:07.735 } 00:18:07.735 }' 00:18:07.735 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:07.735 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:07.735 pt2 00:18:07.735 pt3 00:18:07.735 pt4' 00:18:07.735 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:07.735 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:07.735 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:07.735 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:07.735 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:07.735 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.735 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.735 21:45:08 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.735 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:07.735 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:07.735 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:07.735 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:07.735 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:07.735 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.735 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.735 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.994 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:07.994 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:07.994 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:07.994 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:07.994 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:07.994 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.994 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.994 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.994 21:45:08 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:07.994 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:07.994 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:07.994 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:18:07.994 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.994 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.994 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:07.994 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.994 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:07.994 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:07.994 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:07.994 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.994 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.994 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:07.994 [2024-12-10 21:45:08.637759] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:07.994 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.994 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2cb79322-2a63-4d77-bf45-733eb87e16ab 00:18:07.994 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
2cb79322-2a63-4d77-bf45-733eb87e16ab ']' 00:18:07.994 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:07.994 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.994 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.994 [2024-12-10 21:45:08.685513] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:07.994 [2024-12-10 21:45:08.685539] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:07.994 [2024-12-10 21:45:08.685663] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:07.994 [2024-12-10 21:45:08.685761] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:07.994 [2024-12-10 21:45:08.685775] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:07.994 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.994 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.994 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:07.994 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.994 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.994 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.995 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:07.995 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:07.995 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:07.995 
21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:07.995 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.995 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.995 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.995 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:07.995 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:07.995 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.995 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.995 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.995 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:07.995 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:18:07.995 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.995 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.254 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.255 21:45:08 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.255 [2024-12-10 21:45:08.853267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:08.255 [2024-12-10 21:45:08.855250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:08.255 [2024-12-10 21:45:08.855345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:08.255 [2024-12-10 21:45:08.855386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:08.255 [2024-12-10 21:45:08.855448] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:08.255 [2024-12-10 21:45:08.855493] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:08.255 [2024-12-10 21:45:08.855512] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:08.255 [2024-12-10 21:45:08.855531] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:18:08.255 [2024-12-10 21:45:08.855545] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:08.255 [2024-12-10 21:45:08.855555] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:08.255 request: 00:18:08.255 { 00:18:08.255 "name": "raid_bdev1", 00:18:08.255 "raid_level": "raid5f", 00:18:08.255 "base_bdevs": [ 00:18:08.255 "malloc1", 00:18:08.255 "malloc2", 00:18:08.255 "malloc3", 00:18:08.255 "malloc4" 00:18:08.255 ], 00:18:08.255 "strip_size_kb": 64, 00:18:08.255 "superblock": false, 00:18:08.255 "method": "bdev_raid_create", 00:18:08.255 "req_id": 1 00:18:08.255 } 00:18:08.255 Got JSON-RPC error response 
00:18:08.255 response: 00:18:08.255 { 00:18:08.255 "code": -17, 00:18:08.255 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:08.255 } 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.255 [2024-12-10 21:45:08.917112] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:08.255 [2024-12-10 21:45:08.917219] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:18:08.255 [2024-12-10 21:45:08.917266] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:08.255 [2024-12-10 21:45:08.917302] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.255 [2024-12-10 21:45:08.919661] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.255 [2024-12-10 21:45:08.919745] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:08.255 [2024-12-10 21:45:08.919857] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:08.255 [2024-12-10 21:45:08.919947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:08.255 pt1 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.255 "name": "raid_bdev1", 00:18:08.255 "uuid": "2cb79322-2a63-4d77-bf45-733eb87e16ab", 00:18:08.255 "strip_size_kb": 64, 00:18:08.255 "state": "configuring", 00:18:08.255 "raid_level": "raid5f", 00:18:08.255 "superblock": true, 00:18:08.255 "num_base_bdevs": 4, 00:18:08.255 "num_base_bdevs_discovered": 1, 00:18:08.255 "num_base_bdevs_operational": 4, 00:18:08.255 "base_bdevs_list": [ 00:18:08.255 { 00:18:08.255 "name": "pt1", 00:18:08.255 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:08.255 "is_configured": true, 00:18:08.255 "data_offset": 2048, 00:18:08.255 "data_size": 63488 00:18:08.255 }, 00:18:08.255 { 00:18:08.255 "name": null, 00:18:08.255 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:08.255 "is_configured": false, 00:18:08.255 "data_offset": 2048, 00:18:08.255 "data_size": 63488 00:18:08.255 }, 00:18:08.255 { 00:18:08.255 "name": null, 00:18:08.255 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:08.255 "is_configured": false, 00:18:08.255 "data_offset": 2048, 00:18:08.255 "data_size": 63488 00:18:08.255 }, 00:18:08.255 { 00:18:08.255 "name": null, 00:18:08.255 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:08.255 "is_configured": false, 00:18:08.255 "data_offset": 2048, 00:18:08.255 "data_size": 63488 00:18:08.255 } 00:18:08.255 ] 00:18:08.255 }' 
00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.255 21:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.515 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:18:08.515 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:08.515 21:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.515 21:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.515 [2024-12-10 21:45:09.292549] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:08.515 [2024-12-10 21:45:09.292697] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.515 [2024-12-10 21:45:09.292742] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:08.515 [2024-12-10 21:45:09.292780] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.515 [2024-12-10 21:45:09.293301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.515 [2024-12-10 21:45:09.293374] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:08.515 [2024-12-10 21:45:09.293518] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:08.515 [2024-12-10 21:45:09.293584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:08.774 pt2 00:18:08.774 21:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.774 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:18:08.774 21:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:08.774 21:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.774 [2024-12-10 21:45:09.304534] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:08.774 21:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.774 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:18:08.774 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:08.774 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:08.774 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:08.774 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:08.774 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:08.774 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.774 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.774 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.774 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.774 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.774 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.774 21:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.774 21:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.775 21:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:18:08.775 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.775 "name": "raid_bdev1", 00:18:08.775 "uuid": "2cb79322-2a63-4d77-bf45-733eb87e16ab", 00:18:08.775 "strip_size_kb": 64, 00:18:08.775 "state": "configuring", 00:18:08.775 "raid_level": "raid5f", 00:18:08.775 "superblock": true, 00:18:08.775 "num_base_bdevs": 4, 00:18:08.775 "num_base_bdevs_discovered": 1, 00:18:08.775 "num_base_bdevs_operational": 4, 00:18:08.775 "base_bdevs_list": [ 00:18:08.775 { 00:18:08.775 "name": "pt1", 00:18:08.775 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:08.775 "is_configured": true, 00:18:08.775 "data_offset": 2048, 00:18:08.775 "data_size": 63488 00:18:08.775 }, 00:18:08.775 { 00:18:08.775 "name": null, 00:18:08.775 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:08.775 "is_configured": false, 00:18:08.775 "data_offset": 0, 00:18:08.775 "data_size": 63488 00:18:08.775 }, 00:18:08.775 { 00:18:08.775 "name": null, 00:18:08.775 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:08.775 "is_configured": false, 00:18:08.775 "data_offset": 2048, 00:18:08.775 "data_size": 63488 00:18:08.775 }, 00:18:08.775 { 00:18:08.775 "name": null, 00:18:08.775 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:08.775 "is_configured": false, 00:18:08.775 "data_offset": 2048, 00:18:08.775 "data_size": 63488 00:18:08.775 } 00:18:08.775 ] 00:18:08.775 }' 00:18:08.775 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.775 21:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.033 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:09.033 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:09.033 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:18:09.033 21:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.033 21:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.033 [2024-12-10 21:45:09.759786] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:09.033 [2024-12-10 21:45:09.759902] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.033 [2024-12-10 21:45:09.759939] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:09.033 [2024-12-10 21:45:09.759966] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.033 [2024-12-10 21:45:09.760454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.033 [2024-12-10 21:45:09.760520] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:09.033 [2024-12-10 21:45:09.760631] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:09.033 [2024-12-10 21:45:09.760686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:09.033 pt2 00:18:09.034 21:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.034 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:09.034 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:09.034 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:09.034 21:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.034 21:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.034 [2024-12-10 21:45:09.771731] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:18:09.034 [2024-12-10 21:45:09.771820] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.034 [2024-12-10 21:45:09.771853] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:09.034 [2024-12-10 21:45:09.771879] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.034 [2024-12-10 21:45:09.772301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.034 [2024-12-10 21:45:09.772366] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:09.034 [2024-12-10 21:45:09.772478] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:09.034 [2024-12-10 21:45:09.772537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:09.034 pt3 00:18:09.034 21:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.034 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:09.034 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:09.034 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:09.034 21:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.034 21:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.034 [2024-12-10 21:45:09.783680] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:09.034 [2024-12-10 21:45:09.783722] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.034 [2024-12-10 21:45:09.783752] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:09.034 [2024-12-10 21:45:09.783760] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.034 [2024-12-10 21:45:09.784101] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.034 [2024-12-10 21:45:09.784117] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:09.034 [2024-12-10 21:45:09.784174] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:09.034 [2024-12-10 21:45:09.784192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:09.034 [2024-12-10 21:45:09.784321] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:09.034 [2024-12-10 21:45:09.784330] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:09.034 [2024-12-10 21:45:09.784590] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:09.034 [2024-12-10 21:45:09.791595] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:09.034 [2024-12-10 21:45:09.791617] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:09.034 [2024-12-10 21:45:09.791790] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.034 pt4 00:18:09.034 21:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.034 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:09.034 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:09.034 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:09.034 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.034 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:09.034 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:09.034 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:09.034 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:09.034 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.034 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.034 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.034 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.034 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.034 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.034 21:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.034 21:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.292 21:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.292 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.292 "name": "raid_bdev1", 00:18:09.292 "uuid": "2cb79322-2a63-4d77-bf45-733eb87e16ab", 00:18:09.292 "strip_size_kb": 64, 00:18:09.292 "state": "online", 00:18:09.292 "raid_level": "raid5f", 00:18:09.292 "superblock": true, 00:18:09.292 "num_base_bdevs": 4, 00:18:09.292 "num_base_bdevs_discovered": 4, 00:18:09.292 "num_base_bdevs_operational": 4, 00:18:09.292 "base_bdevs_list": [ 00:18:09.292 { 00:18:09.292 "name": "pt1", 00:18:09.292 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:09.292 "is_configured": true, 00:18:09.292 
"data_offset": 2048, 00:18:09.292 "data_size": 63488 00:18:09.292 }, 00:18:09.292 { 00:18:09.292 "name": "pt2", 00:18:09.292 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:09.292 "is_configured": true, 00:18:09.292 "data_offset": 2048, 00:18:09.292 "data_size": 63488 00:18:09.292 }, 00:18:09.292 { 00:18:09.292 "name": "pt3", 00:18:09.292 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:09.292 "is_configured": true, 00:18:09.292 "data_offset": 2048, 00:18:09.292 "data_size": 63488 00:18:09.292 }, 00:18:09.292 { 00:18:09.292 "name": "pt4", 00:18:09.292 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:09.292 "is_configured": true, 00:18:09.292 "data_offset": 2048, 00:18:09.292 "data_size": 63488 00:18:09.292 } 00:18:09.292 ] 00:18:09.292 }' 00:18:09.292 21:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.292 21:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.552 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:09.552 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:09.552 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:09.552 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:09.552 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:09.552 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:09.552 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:09.552 21:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.552 21:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.552 21:45:10 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:09.552 [2024-12-10 21:45:10.223447] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:09.552 21:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.552 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:09.552 "name": "raid_bdev1", 00:18:09.552 "aliases": [ 00:18:09.552 "2cb79322-2a63-4d77-bf45-733eb87e16ab" 00:18:09.552 ], 00:18:09.552 "product_name": "Raid Volume", 00:18:09.552 "block_size": 512, 00:18:09.552 "num_blocks": 190464, 00:18:09.552 "uuid": "2cb79322-2a63-4d77-bf45-733eb87e16ab", 00:18:09.552 "assigned_rate_limits": { 00:18:09.552 "rw_ios_per_sec": 0, 00:18:09.552 "rw_mbytes_per_sec": 0, 00:18:09.552 "r_mbytes_per_sec": 0, 00:18:09.552 "w_mbytes_per_sec": 0 00:18:09.552 }, 00:18:09.552 "claimed": false, 00:18:09.552 "zoned": false, 00:18:09.552 "supported_io_types": { 00:18:09.552 "read": true, 00:18:09.552 "write": true, 00:18:09.552 "unmap": false, 00:18:09.552 "flush": false, 00:18:09.552 "reset": true, 00:18:09.552 "nvme_admin": false, 00:18:09.552 "nvme_io": false, 00:18:09.552 "nvme_io_md": false, 00:18:09.552 "write_zeroes": true, 00:18:09.552 "zcopy": false, 00:18:09.552 "get_zone_info": false, 00:18:09.552 "zone_management": false, 00:18:09.552 "zone_append": false, 00:18:09.552 "compare": false, 00:18:09.552 "compare_and_write": false, 00:18:09.552 "abort": false, 00:18:09.553 "seek_hole": false, 00:18:09.553 "seek_data": false, 00:18:09.553 "copy": false, 00:18:09.553 "nvme_iov_md": false 00:18:09.553 }, 00:18:09.553 "driver_specific": { 00:18:09.553 "raid": { 00:18:09.553 "uuid": "2cb79322-2a63-4d77-bf45-733eb87e16ab", 00:18:09.553 "strip_size_kb": 64, 00:18:09.553 "state": "online", 00:18:09.553 "raid_level": "raid5f", 00:18:09.553 "superblock": true, 00:18:09.553 "num_base_bdevs": 4, 00:18:09.553 "num_base_bdevs_discovered": 4, 
00:18:09.553 "num_base_bdevs_operational": 4, 00:18:09.553 "base_bdevs_list": [ 00:18:09.553 { 00:18:09.553 "name": "pt1", 00:18:09.553 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:09.553 "is_configured": true, 00:18:09.553 "data_offset": 2048, 00:18:09.553 "data_size": 63488 00:18:09.553 }, 00:18:09.553 { 00:18:09.553 "name": "pt2", 00:18:09.553 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:09.553 "is_configured": true, 00:18:09.553 "data_offset": 2048, 00:18:09.553 "data_size": 63488 00:18:09.553 }, 00:18:09.553 { 00:18:09.553 "name": "pt3", 00:18:09.553 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:09.553 "is_configured": true, 00:18:09.553 "data_offset": 2048, 00:18:09.553 "data_size": 63488 00:18:09.553 }, 00:18:09.553 { 00:18:09.553 "name": "pt4", 00:18:09.553 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:09.553 "is_configured": true, 00:18:09.553 "data_offset": 2048, 00:18:09.553 "data_size": 63488 00:18:09.553 } 00:18:09.553 ] 00:18:09.553 } 00:18:09.553 } 00:18:09.553 }' 00:18:09.553 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:09.553 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:09.553 pt2 00:18:09.553 pt3 00:18:09.553 pt4' 00:18:09.553 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:09.813 [2024-12-10 21:45:10.550830] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.813 21:45:10 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2cb79322-2a63-4d77-bf45-733eb87e16ab '!=' 2cb79322-2a63-4d77-bf45-733eb87e16ab ']' 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.813 21:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.073 [2024-12-10 21:45:10.598633] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:10.073 21:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.073 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:10.073 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:10.073 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.073 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:10.073 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:10.073 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:10.073 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.073 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.073 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:10.073 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.073 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.073 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.073 21:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.073 21:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.073 21:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.073 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.073 "name": "raid_bdev1", 00:18:10.073 "uuid": "2cb79322-2a63-4d77-bf45-733eb87e16ab", 00:18:10.073 "strip_size_kb": 64, 00:18:10.073 "state": "online", 00:18:10.073 "raid_level": "raid5f", 00:18:10.073 "superblock": true, 00:18:10.073 "num_base_bdevs": 4, 00:18:10.073 "num_base_bdevs_discovered": 3, 00:18:10.073 "num_base_bdevs_operational": 3, 00:18:10.073 "base_bdevs_list": [ 00:18:10.073 { 00:18:10.073 "name": null, 00:18:10.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.073 "is_configured": false, 00:18:10.073 "data_offset": 0, 00:18:10.073 "data_size": 63488 00:18:10.073 }, 00:18:10.073 { 00:18:10.073 "name": "pt2", 00:18:10.073 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:10.073 "is_configured": true, 00:18:10.073 "data_offset": 2048, 00:18:10.073 "data_size": 63488 00:18:10.073 }, 00:18:10.073 { 00:18:10.073 "name": "pt3", 00:18:10.073 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:10.073 "is_configured": true, 00:18:10.073 "data_offset": 2048, 00:18:10.073 "data_size": 63488 00:18:10.073 }, 00:18:10.073 { 00:18:10.073 "name": "pt4", 00:18:10.073 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:10.073 "is_configured": true, 00:18:10.073 
"data_offset": 2048, 00:18:10.073 "data_size": 63488 00:18:10.073 } 00:18:10.073 ] 00:18:10.073 }' 00:18:10.073 21:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.073 21:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.332 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:10.332 21:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.332 21:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.332 [2024-12-10 21:45:11.013885] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:10.332 [2024-12-10 21:45:11.013972] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:10.332 [2024-12-10 21:45:11.014089] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:10.332 [2024-12-10 21:45:11.014180] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:10.333 [2024-12-10 21:45:11.014224] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.333 [2024-12-10 21:45:11.093716] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:10.333 [2024-12-10 21:45:11.093776] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.333 [2024-12-10 21:45:11.093808] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:10.333 [2024-12-10 21:45:11.093817] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.333 [2024-12-10 21:45:11.095908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.333 [2024-12-10 21:45:11.095945] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:10.333 [2024-12-10 21:45:11.096021] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:10.333 [2024-12-10 21:45:11.096075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:10.333 pt2 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.333 21:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.592 21:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.593 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.593 "name": "raid_bdev1", 00:18:10.593 "uuid": "2cb79322-2a63-4d77-bf45-733eb87e16ab", 00:18:10.593 "strip_size_kb": 64, 00:18:10.593 "state": "configuring", 00:18:10.593 "raid_level": "raid5f", 00:18:10.593 "superblock": true, 00:18:10.593 
"num_base_bdevs": 4, 00:18:10.593 "num_base_bdevs_discovered": 1, 00:18:10.593 "num_base_bdevs_operational": 3, 00:18:10.593 "base_bdevs_list": [ 00:18:10.593 { 00:18:10.593 "name": null, 00:18:10.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.593 "is_configured": false, 00:18:10.593 "data_offset": 2048, 00:18:10.593 "data_size": 63488 00:18:10.593 }, 00:18:10.593 { 00:18:10.593 "name": "pt2", 00:18:10.593 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:10.593 "is_configured": true, 00:18:10.593 "data_offset": 2048, 00:18:10.593 "data_size": 63488 00:18:10.593 }, 00:18:10.593 { 00:18:10.593 "name": null, 00:18:10.593 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:10.593 "is_configured": false, 00:18:10.593 "data_offset": 2048, 00:18:10.593 "data_size": 63488 00:18:10.593 }, 00:18:10.593 { 00:18:10.593 "name": null, 00:18:10.593 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:10.593 "is_configured": false, 00:18:10.593 "data_offset": 2048, 00:18:10.593 "data_size": 63488 00:18:10.593 } 00:18:10.593 ] 00:18:10.593 }' 00:18:10.593 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.593 21:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.857 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:10.857 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:10.857 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:10.857 21:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.857 21:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.857 [2024-12-10 21:45:11.521024] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:10.857 [2024-12-10 
21:45:11.521154] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.857 [2024-12-10 21:45:11.521197] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:10.857 [2024-12-10 21:45:11.521243] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.857 [2024-12-10 21:45:11.521693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.857 [2024-12-10 21:45:11.521750] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:10.857 [2024-12-10 21:45:11.521856] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:10.857 [2024-12-10 21:45:11.521904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:10.857 pt3 00:18:10.857 21:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.857 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:10.857 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:10.857 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:10.857 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:10.857 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:10.857 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:10.857 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.857 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.857 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:10.857 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.857 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.857 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.857 21:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.857 21:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.857 21:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.857 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.857 "name": "raid_bdev1", 00:18:10.857 "uuid": "2cb79322-2a63-4d77-bf45-733eb87e16ab", 00:18:10.857 "strip_size_kb": 64, 00:18:10.857 "state": "configuring", 00:18:10.857 "raid_level": "raid5f", 00:18:10.857 "superblock": true, 00:18:10.857 "num_base_bdevs": 4, 00:18:10.857 "num_base_bdevs_discovered": 2, 00:18:10.857 "num_base_bdevs_operational": 3, 00:18:10.857 "base_bdevs_list": [ 00:18:10.857 { 00:18:10.857 "name": null, 00:18:10.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.857 "is_configured": false, 00:18:10.857 "data_offset": 2048, 00:18:10.857 "data_size": 63488 00:18:10.857 }, 00:18:10.857 { 00:18:10.857 "name": "pt2", 00:18:10.857 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:10.857 "is_configured": true, 00:18:10.857 "data_offset": 2048, 00:18:10.857 "data_size": 63488 00:18:10.857 }, 00:18:10.857 { 00:18:10.857 "name": "pt3", 00:18:10.857 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:10.857 "is_configured": true, 00:18:10.857 "data_offset": 2048, 00:18:10.857 "data_size": 63488 00:18:10.857 }, 00:18:10.857 { 00:18:10.857 "name": null, 00:18:10.857 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:10.857 "is_configured": false, 00:18:10.857 "data_offset": 2048, 
00:18:10.857 "data_size": 63488 00:18:10.857 } 00:18:10.857 ] 00:18:10.857 }' 00:18:10.857 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.857 21:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.431 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:11.431 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:11.431 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:18:11.432 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:11.432 21:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.432 21:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.432 [2024-12-10 21:45:11.928356] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:11.432 [2024-12-10 21:45:11.928491] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:11.432 [2024-12-10 21:45:11.928536] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:18:11.432 [2024-12-10 21:45:11.928566] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:11.432 [2024-12-10 21:45:11.929025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:11.432 [2024-12-10 21:45:11.929089] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:11.432 [2024-12-10 21:45:11.929205] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:11.432 [2024-12-10 21:45:11.929262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:11.432 [2024-12-10 21:45:11.929458] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:11.432 [2024-12-10 21:45:11.929496] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:11.432 [2024-12-10 21:45:11.929741] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:11.432 [2024-12-10 21:45:11.937029] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:11.432 [2024-12-10 21:45:11.937056] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:11.432 [2024-12-10 21:45:11.937359] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:11.432 pt4 00:18:11.432 21:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.432 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:11.432 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:11.432 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:11.432 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:11.432 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:11.432 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:11.432 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.432 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.432 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.432 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.432 
21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.432 21:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.432 21:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.432 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.432 21:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.432 21:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.432 "name": "raid_bdev1", 00:18:11.432 "uuid": "2cb79322-2a63-4d77-bf45-733eb87e16ab", 00:18:11.432 "strip_size_kb": 64, 00:18:11.432 "state": "online", 00:18:11.432 "raid_level": "raid5f", 00:18:11.432 "superblock": true, 00:18:11.432 "num_base_bdevs": 4, 00:18:11.432 "num_base_bdevs_discovered": 3, 00:18:11.432 "num_base_bdevs_operational": 3, 00:18:11.432 "base_bdevs_list": [ 00:18:11.432 { 00:18:11.432 "name": null, 00:18:11.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.432 "is_configured": false, 00:18:11.432 "data_offset": 2048, 00:18:11.432 "data_size": 63488 00:18:11.432 }, 00:18:11.432 { 00:18:11.432 "name": "pt2", 00:18:11.432 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:11.432 "is_configured": true, 00:18:11.432 "data_offset": 2048, 00:18:11.432 "data_size": 63488 00:18:11.432 }, 00:18:11.432 { 00:18:11.432 "name": "pt3", 00:18:11.432 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:11.432 "is_configured": true, 00:18:11.432 "data_offset": 2048, 00:18:11.432 "data_size": 63488 00:18:11.432 }, 00:18:11.432 { 00:18:11.432 "name": "pt4", 00:18:11.432 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:11.432 "is_configured": true, 00:18:11.432 "data_offset": 2048, 00:18:11.432 "data_size": 63488 00:18:11.432 } 00:18:11.432 ] 00:18:11.432 }' 00:18:11.432 21:45:11 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.432 21:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.692 21:45:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:11.692 21:45:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.692 21:45:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.692 [2024-12-10 21:45:12.409220] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:11.692 [2024-12-10 21:45:12.409324] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:11.692 [2024-12-10 21:45:12.409452] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:11.692 [2024-12-10 21:45:12.409546] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:11.692 [2024-12-10 21:45:12.409595] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:11.692 21:45:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.692 21:45:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.692 21:45:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:11.692 21:45:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.692 21:45:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.692 21:45:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.692 21:45:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:11.692 21:45:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:18:11.692 21:45:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:18:11.692 21:45:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:18:11.692 21:45:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:18:11.692 21:45:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.692 21:45:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.951 21:45:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.951 21:45:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:11.951 21:45:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.951 21:45:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.951 [2024-12-10 21:45:12.485076] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:11.951 [2024-12-10 21:45:12.485190] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:11.951 [2024-12-10 21:45:12.485236] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:18:11.951 [2024-12-10 21:45:12.485271] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:11.951 [2024-12-10 21:45:12.487502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:11.951 [2024-12-10 21:45:12.487577] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:11.951 [2024-12-10 21:45:12.487669] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:11.951 [2024-12-10 21:45:12.487726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:11.951 
[2024-12-10 21:45:12.487861] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:11.951 [2024-12-10 21:45:12.487877] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:11.951 [2024-12-10 21:45:12.487892] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:11.951 [2024-12-10 21:45:12.487949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:11.951 [2024-12-10 21:45:12.488049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:11.951 pt1 00:18:11.952 21:45:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.952 21:45:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:18:11.952 21:45:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:11.952 21:45:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:11.952 21:45:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:11.952 21:45:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:11.952 21:45:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:11.952 21:45:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:11.952 21:45:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.952 21:45:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.952 21:45:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.952 21:45:12 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.952 21:45:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.952 21:45:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.952 21:45:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.952 21:45:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.952 21:45:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.952 21:45:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.952 "name": "raid_bdev1", 00:18:11.952 "uuid": "2cb79322-2a63-4d77-bf45-733eb87e16ab", 00:18:11.952 "strip_size_kb": 64, 00:18:11.952 "state": "configuring", 00:18:11.952 "raid_level": "raid5f", 00:18:11.952 "superblock": true, 00:18:11.952 "num_base_bdevs": 4, 00:18:11.952 "num_base_bdevs_discovered": 2, 00:18:11.952 "num_base_bdevs_operational": 3, 00:18:11.952 "base_bdevs_list": [ 00:18:11.952 { 00:18:11.952 "name": null, 00:18:11.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.952 "is_configured": false, 00:18:11.952 "data_offset": 2048, 00:18:11.952 "data_size": 63488 00:18:11.952 }, 00:18:11.952 { 00:18:11.952 "name": "pt2", 00:18:11.952 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:11.952 "is_configured": true, 00:18:11.952 "data_offset": 2048, 00:18:11.952 "data_size": 63488 00:18:11.952 }, 00:18:11.952 { 00:18:11.952 "name": "pt3", 00:18:11.952 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:11.952 "is_configured": true, 00:18:11.952 "data_offset": 2048, 00:18:11.952 "data_size": 63488 00:18:11.952 }, 00:18:11.952 { 00:18:11.952 "name": null, 00:18:11.952 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:11.952 "is_configured": false, 00:18:11.952 "data_offset": 2048, 00:18:11.952 "data_size": 63488 00:18:11.952 } 00:18:11.952 ] 
00:18:11.952 }' 00:18:11.952 21:45:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.952 21:45:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.212 21:45:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:18:12.212 21:45:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:12.212 21:45:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.212 21:45:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.212 21:45:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.212 21:45:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:18:12.212 21:45:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:12.212 21:45:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.212 21:45:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.212 [2024-12-10 21:45:12.964337] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:12.212 [2024-12-10 21:45:12.964471] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.212 [2024-12-10 21:45:12.964517] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:18:12.212 [2024-12-10 21:45:12.964560] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.212 [2024-12-10 21:45:12.965064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.212 [2024-12-10 21:45:12.965129] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:18:12.212 [2024-12-10 21:45:12.965250] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:12.212 [2024-12-10 21:45:12.965319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:12.212 [2024-12-10 21:45:12.965496] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:12.212 [2024-12-10 21:45:12.965537] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:12.212 [2024-12-10 21:45:12.965810] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:12.212 [2024-12-10 21:45:12.973667] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:12.212 [2024-12-10 21:45:12.973730] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:12.212 [2024-12-10 21:45:12.974033] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.212 pt4 00:18:12.212 21:45:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.212 21:45:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:12.212 21:45:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:12.212 21:45:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:12.212 21:45:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:12.212 21:45:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:12.212 21:45:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:12.212 21:45:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.212 21:45:12 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.212 21:45:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.212 21:45:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.212 21:45:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.212 21:45:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.212 21:45:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.212 21:45:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.471 21:45:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.471 21:45:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.471 "name": "raid_bdev1", 00:18:12.471 "uuid": "2cb79322-2a63-4d77-bf45-733eb87e16ab", 00:18:12.471 "strip_size_kb": 64, 00:18:12.471 "state": "online", 00:18:12.471 "raid_level": "raid5f", 00:18:12.471 "superblock": true, 00:18:12.471 "num_base_bdevs": 4, 00:18:12.471 "num_base_bdevs_discovered": 3, 00:18:12.471 "num_base_bdevs_operational": 3, 00:18:12.471 "base_bdevs_list": [ 00:18:12.471 { 00:18:12.471 "name": null, 00:18:12.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.471 "is_configured": false, 00:18:12.471 "data_offset": 2048, 00:18:12.471 "data_size": 63488 00:18:12.471 }, 00:18:12.471 { 00:18:12.471 "name": "pt2", 00:18:12.471 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:12.471 "is_configured": true, 00:18:12.471 "data_offset": 2048, 00:18:12.471 "data_size": 63488 00:18:12.471 }, 00:18:12.471 { 00:18:12.471 "name": "pt3", 00:18:12.471 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:12.471 "is_configured": true, 00:18:12.471 "data_offset": 2048, 00:18:12.471 "data_size": 63488 
00:18:12.471 }, 00:18:12.471 { 00:18:12.471 "name": "pt4", 00:18:12.471 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:12.471 "is_configured": true, 00:18:12.471 "data_offset": 2048, 00:18:12.471 "data_size": 63488 00:18:12.471 } 00:18:12.471 ] 00:18:12.471 }' 00:18:12.471 21:45:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.471 21:45:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.730 21:45:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:12.730 21:45:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:12.730 21:45:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.730 21:45:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.731 21:45:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.731 21:45:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:12.731 21:45:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:12.731 21:45:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.731 21:45:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.731 21:45:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:12.731 [2024-12-10 21:45:13.426910] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:12.731 21:45:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.731 21:45:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 2cb79322-2a63-4d77-bf45-733eb87e16ab '!=' 2cb79322-2a63-4d77-bf45-733eb87e16ab ']' 00:18:12.731 21:45:13 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84298 00:18:12.731 21:45:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84298 ']' 00:18:12.731 21:45:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84298 00:18:12.731 21:45:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:18:12.731 21:45:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:12.731 21:45:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84298 00:18:12.731 21:45:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:12.731 21:45:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:12.731 21:45:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84298' 00:18:12.731 killing process with pid 84298 00:18:12.731 21:45:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84298 00:18:12.731 [2024-12-10 21:45:13.503764] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:12.731 21:45:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84298 00:18:12.731 [2024-12-10 21:45:13.503935] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:12.731 [2024-12-10 21:45:13.504038] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:12.731 [2024-12-10 21:45:13.504067] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:13.299 [2024-12-10 21:45:13.908351] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:14.677 21:45:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:18:14.677 
00:18:14.677 real 0m8.368s 00:18:14.677 user 0m13.102s 00:18:14.677 sys 0m1.471s 00:18:14.677 ************************************ 00:18:14.678 END TEST raid5f_superblock_test 00:18:14.678 ************************************ 00:18:14.678 21:45:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:14.678 21:45:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.678 21:45:15 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:18:14.678 21:45:15 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:18:14.678 21:45:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:14.678 21:45:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:14.678 21:45:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:14.678 ************************************ 00:18:14.678 START TEST raid5f_rebuild_test 00:18:14.678 ************************************ 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:14.678 21:45:15 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84782 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84782 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84782 ']' 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:14.678 21:45:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.678 [2024-12-10 21:45:15.233674] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:18:14.678 [2024-12-10 21:45:15.233905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84782 ] 00:18:14.678 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:14.678 Zero copy mechanism will not be used. 00:18:14.678 [2024-12-10 21:45:15.407771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.937 [2024-12-10 21:45:15.525723] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.937 [2024-12-10 21:45:15.713369] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:14.937 [2024-12-10 21:45:15.713503] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:15.506 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:15.506 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:18:15.506 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:15.506 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:15.506 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.506 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.506 BaseBdev1_malloc 00:18:15.506 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.506 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:15.506 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.506 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:18:15.506 [2024-12-10 21:45:16.108000] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:15.506 [2024-12-10 21:45:16.108153] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:15.506 [2024-12-10 21:45:16.108197] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:15.506 [2024-12-10 21:45:16.108234] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:15.506 [2024-12-10 21:45:16.110478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:15.506 [2024-12-10 21:45:16.110565] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:15.506 BaseBdev1 00:18:15.506 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.506 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:15.506 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:15.506 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.506 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.506 BaseBdev2_malloc 00:18:15.506 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.506 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:15.506 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.506 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.506 [2024-12-10 21:45:16.162116] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:15.506 [2024-12-10 21:45:16.162222] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:15.506 [2024-12-10 21:45:16.162275] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:15.506 [2024-12-10 21:45:16.162313] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:15.506 [2024-12-10 21:45:16.164413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:15.506 [2024-12-10 21:45:16.164497] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:15.506 BaseBdev2 00:18:15.506 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.506 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:15.506 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:15.506 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.506 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.506 BaseBdev3_malloc 00:18:15.506 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.506 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:15.506 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.506 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.506 [2024-12-10 21:45:16.234102] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:15.506 [2024-12-10 21:45:16.234204] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:15.506 [2024-12-10 21:45:16.234264] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:15.506 
[2024-12-10 21:45:16.234303] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:15.506 [2024-12-10 21:45:16.236465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:15.506 [2024-12-10 21:45:16.236543] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:15.506 BaseBdev3 00:18:15.506 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.506 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:15.506 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:15.506 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.506 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.506 BaseBdev4_malloc 00:18:15.506 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.506 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:15.506 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.506 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.766 [2024-12-10 21:45:16.290717] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:15.766 [2024-12-10 21:45:16.290826] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:15.766 [2024-12-10 21:45:16.290852] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:15.766 [2024-12-10 21:45:16.290863] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:15.766 [2024-12-10 21:45:16.292995] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:18:15.766 [2024-12-10 21:45:16.293043] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:15.766 BaseBdev4 00:18:15.766 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.766 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:15.766 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.766 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.766 spare_malloc 00:18:15.766 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.766 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:15.766 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.766 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.766 spare_delay 00:18:15.766 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.766 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:15.766 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.766 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.766 [2024-12-10 21:45:16.355075] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:15.766 [2024-12-10 21:45:16.355133] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:15.766 [2024-12-10 21:45:16.355151] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:15.766 [2024-12-10 21:45:16.355162] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:15.766 [2024-12-10 21:45:16.357397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:15.766 [2024-12-10 21:45:16.357497] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:15.766 spare 00:18:15.766 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.766 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:15.766 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.766 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.766 [2024-12-10 21:45:16.367104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:15.766 [2024-12-10 21:45:16.369047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:15.766 [2024-12-10 21:45:16.369159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:15.766 [2024-12-10 21:45:16.369221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:15.766 [2024-12-10 21:45:16.369346] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:15.766 [2024-12-10 21:45:16.369362] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:15.766 [2024-12-10 21:45:16.369633] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:15.766 [2024-12-10 21:45:16.377914] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:15.766 [2024-12-10 21:45:16.377935] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:15.766 [2024-12-10 
21:45:16.378131] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:15.766 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.766 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:15.766 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:15.766 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:15.766 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:15.766 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:15.766 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:15.766 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.766 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.766 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.766 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.766 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.766 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.766 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.766 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.766 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.766 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.766 "name": "raid_bdev1", 00:18:15.766 "uuid": 
"abbb63de-919d-4528-a80b-460225d1a6a5", 00:18:15.766 "strip_size_kb": 64, 00:18:15.766 "state": "online", 00:18:15.766 "raid_level": "raid5f", 00:18:15.766 "superblock": false, 00:18:15.766 "num_base_bdevs": 4, 00:18:15.766 "num_base_bdevs_discovered": 4, 00:18:15.766 "num_base_bdevs_operational": 4, 00:18:15.766 "base_bdevs_list": [ 00:18:15.766 { 00:18:15.766 "name": "BaseBdev1", 00:18:15.766 "uuid": "cb7846e6-9a84-5799-b69e-2e4bee75711f", 00:18:15.766 "is_configured": true, 00:18:15.766 "data_offset": 0, 00:18:15.766 "data_size": 65536 00:18:15.766 }, 00:18:15.766 { 00:18:15.766 "name": "BaseBdev2", 00:18:15.766 "uuid": "c5efe8c1-7f5b-5b02-be5d-d08e14b1604d", 00:18:15.766 "is_configured": true, 00:18:15.766 "data_offset": 0, 00:18:15.766 "data_size": 65536 00:18:15.766 }, 00:18:15.766 { 00:18:15.766 "name": "BaseBdev3", 00:18:15.766 "uuid": "a55434fe-c6b7-5fbd-bac6-cd4bd216292f", 00:18:15.766 "is_configured": true, 00:18:15.766 "data_offset": 0, 00:18:15.766 "data_size": 65536 00:18:15.766 }, 00:18:15.766 { 00:18:15.766 "name": "BaseBdev4", 00:18:15.766 "uuid": "11357154-4eee-5ee2-a767-02f18363386b", 00:18:15.766 "is_configured": true, 00:18:15.766 "data_offset": 0, 00:18:15.766 "data_size": 65536 00:18:15.766 } 00:18:15.766 ] 00:18:15.766 }' 00:18:15.767 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.767 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.026 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:16.026 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.026 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.026 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:16.026 [2024-12-10 21:45:16.743314] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:18:16.026 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.026 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:18:16.026 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:16.026 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.026 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.026 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.285 21:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.285 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:18:16.285 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:16.285 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:16.285 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:16.285 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:16.285 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:16.285 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:16.285 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:16.285 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:16.285 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:16.286 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:16.286 21:45:16 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:16.286 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:16.286 21:45:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:16.286 [2024-12-10 21:45:17.026646] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:16.286 /dev/nbd0 00:18:16.286 21:45:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:16.545 21:45:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:16.545 21:45:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:16.545 21:45:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:16.545 21:45:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:16.545 21:45:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:16.545 21:45:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:16.545 21:45:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:16.545 21:45:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:16.545 21:45:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:16.545 21:45:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:16.545 1+0 records in 00:18:16.545 1+0 records out 00:18:16.545 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388002 s, 10.6 MB/s 00:18:16.545 21:45:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:16.545 21:45:17 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:16.545 21:45:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:16.545 21:45:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:16.545 21:45:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:16.545 21:45:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:16.545 21:45:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:16.545 21:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:16.545 21:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:18:16.545 21:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:18:16.545 21:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:18:17.114 512+0 records in 00:18:17.114 512+0 records out 00:18:17.114 100663296 bytes (101 MB, 96 MiB) copied, 0.533456 s, 189 MB/s 00:18:17.114 21:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:17.114 21:45:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:17.114 21:45:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:17.114 21:45:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:17.114 21:45:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:17.114 21:45:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:17.114 21:45:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:18:17.114 [2024-12-10 21:45:17.819192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:17.114 21:45:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:17.114 21:45:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:17.114 21:45:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:17.114 21:45:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:17.114 21:45:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:17.114 21:45:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:17.114 21:45:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:17.114 21:45:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:17.114 21:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:17.114 21:45:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.114 21:45:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.114 [2024-12-10 21:45:17.846144] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:17.114 21:45:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.114 21:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:17.114 21:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:17.114 21:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:17.114 21:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:17.114 21:45:17 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:17.114 21:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:17.114 21:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.114 21:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.114 21:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.114 21:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.114 21:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.114 21:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.114 21:45:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.114 21:45:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.114 21:45:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.374 21:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.374 "name": "raid_bdev1", 00:18:17.374 "uuid": "abbb63de-919d-4528-a80b-460225d1a6a5", 00:18:17.374 "strip_size_kb": 64, 00:18:17.374 "state": "online", 00:18:17.374 "raid_level": "raid5f", 00:18:17.374 "superblock": false, 00:18:17.374 "num_base_bdevs": 4, 00:18:17.374 "num_base_bdevs_discovered": 3, 00:18:17.374 "num_base_bdevs_operational": 3, 00:18:17.374 "base_bdevs_list": [ 00:18:17.374 { 00:18:17.374 "name": null, 00:18:17.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.374 "is_configured": false, 00:18:17.374 "data_offset": 0, 00:18:17.374 "data_size": 65536 00:18:17.374 }, 00:18:17.374 { 00:18:17.374 "name": "BaseBdev2", 00:18:17.374 "uuid": "c5efe8c1-7f5b-5b02-be5d-d08e14b1604d", 00:18:17.374 "is_configured": true, 00:18:17.374 
"data_offset": 0, 00:18:17.374 "data_size": 65536 00:18:17.374 }, 00:18:17.374 { 00:18:17.374 "name": "BaseBdev3", 00:18:17.374 "uuid": "a55434fe-c6b7-5fbd-bac6-cd4bd216292f", 00:18:17.374 "is_configured": true, 00:18:17.374 "data_offset": 0, 00:18:17.374 "data_size": 65536 00:18:17.374 }, 00:18:17.374 { 00:18:17.374 "name": "BaseBdev4", 00:18:17.374 "uuid": "11357154-4eee-5ee2-a767-02f18363386b", 00:18:17.374 "is_configured": true, 00:18:17.374 "data_offset": 0, 00:18:17.374 "data_size": 65536 00:18:17.374 } 00:18:17.374 ] 00:18:17.374 }' 00:18:17.374 21:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.374 21:45:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.633 21:45:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:17.633 21:45:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.633 21:45:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.633 [2024-12-10 21:45:18.321357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:17.633 [2024-12-10 21:45:18.337841] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:18:17.633 21:45:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.633 21:45:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:17.633 [2024-12-10 21:45:18.347146] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:18.571 21:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:18.571 21:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:18.571 21:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:18:18.571 21:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:18.571 21:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:18.571 21:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.571 21:45:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.571 21:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.571 21:45:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.833 21:45:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.833 21:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:18.833 "name": "raid_bdev1", 00:18:18.833 "uuid": "abbb63de-919d-4528-a80b-460225d1a6a5", 00:18:18.833 "strip_size_kb": 64, 00:18:18.833 "state": "online", 00:18:18.833 "raid_level": "raid5f", 00:18:18.833 "superblock": false, 00:18:18.833 "num_base_bdevs": 4, 00:18:18.833 "num_base_bdevs_discovered": 4, 00:18:18.833 "num_base_bdevs_operational": 4, 00:18:18.833 "process": { 00:18:18.833 "type": "rebuild", 00:18:18.833 "target": "spare", 00:18:18.833 "progress": { 00:18:18.833 "blocks": 19200, 00:18:18.833 "percent": 9 00:18:18.833 } 00:18:18.833 }, 00:18:18.833 "base_bdevs_list": [ 00:18:18.833 { 00:18:18.833 "name": "spare", 00:18:18.833 "uuid": "233482c3-daad-59d7-a1bd-5767fb934b41", 00:18:18.833 "is_configured": true, 00:18:18.833 "data_offset": 0, 00:18:18.833 "data_size": 65536 00:18:18.833 }, 00:18:18.833 { 00:18:18.833 "name": "BaseBdev2", 00:18:18.833 "uuid": "c5efe8c1-7f5b-5b02-be5d-d08e14b1604d", 00:18:18.833 "is_configured": true, 00:18:18.833 "data_offset": 0, 00:18:18.833 "data_size": 65536 00:18:18.833 }, 00:18:18.833 { 00:18:18.833 "name": "BaseBdev3", 00:18:18.833 "uuid": 
"a55434fe-c6b7-5fbd-bac6-cd4bd216292f", 00:18:18.833 "is_configured": true, 00:18:18.833 "data_offset": 0, 00:18:18.833 "data_size": 65536 00:18:18.833 }, 00:18:18.833 { 00:18:18.833 "name": "BaseBdev4", 00:18:18.833 "uuid": "11357154-4eee-5ee2-a767-02f18363386b", 00:18:18.833 "is_configured": true, 00:18:18.833 "data_offset": 0, 00:18:18.833 "data_size": 65536 00:18:18.833 } 00:18:18.833 ] 00:18:18.833 }' 00:18:18.833 21:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:18.833 21:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:18.833 21:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:18.833 21:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:18.833 21:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:18.833 21:45:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.833 21:45:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.833 [2024-12-10 21:45:19.490244] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:18.833 [2024-12-10 21:45:19.554274] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:18.833 [2024-12-10 21:45:19.554411] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:18.833 [2024-12-10 21:45:19.554459] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:18.833 [2024-12-10 21:45:19.554484] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:18.833 21:45:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.833 21:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:18.833 21:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:18.833 21:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.833 21:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:18.833 21:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:18.833 21:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:18.833 21:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.833 21:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.833 21:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.833 21:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.833 21:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.833 21:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.833 21:45:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.833 21:45:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.833 21:45:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.096 21:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.096 "name": "raid_bdev1", 00:18:19.096 "uuid": "abbb63de-919d-4528-a80b-460225d1a6a5", 00:18:19.096 "strip_size_kb": 64, 00:18:19.096 "state": "online", 00:18:19.096 "raid_level": "raid5f", 00:18:19.096 "superblock": false, 00:18:19.096 "num_base_bdevs": 4, 00:18:19.096 "num_base_bdevs_discovered": 3, 00:18:19.096 
"num_base_bdevs_operational": 3, 00:18:19.096 "base_bdevs_list": [ 00:18:19.096 { 00:18:19.096 "name": null, 00:18:19.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.096 "is_configured": false, 00:18:19.096 "data_offset": 0, 00:18:19.096 "data_size": 65536 00:18:19.096 }, 00:18:19.096 { 00:18:19.096 "name": "BaseBdev2", 00:18:19.096 "uuid": "c5efe8c1-7f5b-5b02-be5d-d08e14b1604d", 00:18:19.096 "is_configured": true, 00:18:19.096 "data_offset": 0, 00:18:19.096 "data_size": 65536 00:18:19.096 }, 00:18:19.096 { 00:18:19.096 "name": "BaseBdev3", 00:18:19.096 "uuid": "a55434fe-c6b7-5fbd-bac6-cd4bd216292f", 00:18:19.096 "is_configured": true, 00:18:19.096 "data_offset": 0, 00:18:19.096 "data_size": 65536 00:18:19.096 }, 00:18:19.096 { 00:18:19.096 "name": "BaseBdev4", 00:18:19.096 "uuid": "11357154-4eee-5ee2-a767-02f18363386b", 00:18:19.096 "is_configured": true, 00:18:19.096 "data_offset": 0, 00:18:19.096 "data_size": 65536 00:18:19.096 } 00:18:19.096 ] 00:18:19.096 }' 00:18:19.096 21:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.096 21:45:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.355 21:45:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:19.355 21:45:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:19.355 21:45:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:19.355 21:45:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:19.355 21:45:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:19.355 21:45:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.355 21:45:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.355 21:45:20 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.355 21:45:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.355 21:45:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.355 21:45:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:19.355 "name": "raid_bdev1", 00:18:19.355 "uuid": "abbb63de-919d-4528-a80b-460225d1a6a5", 00:18:19.355 "strip_size_kb": 64, 00:18:19.355 "state": "online", 00:18:19.355 "raid_level": "raid5f", 00:18:19.355 "superblock": false, 00:18:19.355 "num_base_bdevs": 4, 00:18:19.355 "num_base_bdevs_discovered": 3, 00:18:19.355 "num_base_bdevs_operational": 3, 00:18:19.355 "base_bdevs_list": [ 00:18:19.355 { 00:18:19.355 "name": null, 00:18:19.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.355 "is_configured": false, 00:18:19.355 "data_offset": 0, 00:18:19.355 "data_size": 65536 00:18:19.355 }, 00:18:19.355 { 00:18:19.355 "name": "BaseBdev2", 00:18:19.355 "uuid": "c5efe8c1-7f5b-5b02-be5d-d08e14b1604d", 00:18:19.355 "is_configured": true, 00:18:19.355 "data_offset": 0, 00:18:19.355 "data_size": 65536 00:18:19.355 }, 00:18:19.355 { 00:18:19.355 "name": "BaseBdev3", 00:18:19.355 "uuid": "a55434fe-c6b7-5fbd-bac6-cd4bd216292f", 00:18:19.355 "is_configured": true, 00:18:19.355 "data_offset": 0, 00:18:19.355 "data_size": 65536 00:18:19.355 }, 00:18:19.355 { 00:18:19.355 "name": "BaseBdev4", 00:18:19.355 "uuid": "11357154-4eee-5ee2-a767-02f18363386b", 00:18:19.355 "is_configured": true, 00:18:19.355 "data_offset": 0, 00:18:19.355 "data_size": 65536 00:18:19.355 } 00:18:19.355 ] 00:18:19.355 }' 00:18:19.355 21:45:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:19.355 21:45:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:19.355 21:45:20 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:19.615 21:45:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:19.615 21:45:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:19.615 21:45:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.615 21:45:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.615 [2024-12-10 21:45:20.161182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:19.615 [2024-12-10 21:45:20.175918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:18:19.615 21:45:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.615 21:45:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:19.615 [2024-12-10 21:45:20.185309] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:20.552 21:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:20.552 21:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.552 21:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:20.552 21:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:20.552 21:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.552 21:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.552 21:45:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.552 21:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.552 
21:45:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.552 21:45:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.552 21:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.552 "name": "raid_bdev1", 00:18:20.552 "uuid": "abbb63de-919d-4528-a80b-460225d1a6a5", 00:18:20.552 "strip_size_kb": 64, 00:18:20.552 "state": "online", 00:18:20.552 "raid_level": "raid5f", 00:18:20.552 "superblock": false, 00:18:20.552 "num_base_bdevs": 4, 00:18:20.552 "num_base_bdevs_discovered": 4, 00:18:20.552 "num_base_bdevs_operational": 4, 00:18:20.552 "process": { 00:18:20.552 "type": "rebuild", 00:18:20.552 "target": "spare", 00:18:20.552 "progress": { 00:18:20.552 "blocks": 19200, 00:18:20.552 "percent": 9 00:18:20.552 } 00:18:20.552 }, 00:18:20.552 "base_bdevs_list": [ 00:18:20.552 { 00:18:20.552 "name": "spare", 00:18:20.552 "uuid": "233482c3-daad-59d7-a1bd-5767fb934b41", 00:18:20.552 "is_configured": true, 00:18:20.552 "data_offset": 0, 00:18:20.552 "data_size": 65536 00:18:20.552 }, 00:18:20.552 { 00:18:20.552 "name": "BaseBdev2", 00:18:20.552 "uuid": "c5efe8c1-7f5b-5b02-be5d-d08e14b1604d", 00:18:20.552 "is_configured": true, 00:18:20.552 "data_offset": 0, 00:18:20.552 "data_size": 65536 00:18:20.552 }, 00:18:20.552 { 00:18:20.552 "name": "BaseBdev3", 00:18:20.552 "uuid": "a55434fe-c6b7-5fbd-bac6-cd4bd216292f", 00:18:20.552 "is_configured": true, 00:18:20.552 "data_offset": 0, 00:18:20.552 "data_size": 65536 00:18:20.552 }, 00:18:20.552 { 00:18:20.552 "name": "BaseBdev4", 00:18:20.552 "uuid": "11357154-4eee-5ee2-a767-02f18363386b", 00:18:20.552 "is_configured": true, 00:18:20.552 "data_offset": 0, 00:18:20.552 "data_size": 65536 00:18:20.552 } 00:18:20.552 ] 00:18:20.552 }' 00:18:20.552 21:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.552 21:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 
-- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:20.552 21:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.552 21:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:20.552 21:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:20.552 21:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:20.552 21:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:20.552 21:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=633 00:18:20.552 21:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:20.552 21:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:20.552 21:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.552 21:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:20.552 21:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:20.552 21:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.552 21:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.552 21:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.552 21:45:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.552 21:45:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.810 21:45:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.810 21:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:18:20.810 "name": "raid_bdev1", 00:18:20.810 "uuid": "abbb63de-919d-4528-a80b-460225d1a6a5", 00:18:20.810 "strip_size_kb": 64, 00:18:20.810 "state": "online", 00:18:20.810 "raid_level": "raid5f", 00:18:20.810 "superblock": false, 00:18:20.810 "num_base_bdevs": 4, 00:18:20.810 "num_base_bdevs_discovered": 4, 00:18:20.810 "num_base_bdevs_operational": 4, 00:18:20.810 "process": { 00:18:20.810 "type": "rebuild", 00:18:20.810 "target": "spare", 00:18:20.810 "progress": { 00:18:20.810 "blocks": 21120, 00:18:20.810 "percent": 10 00:18:20.810 } 00:18:20.810 }, 00:18:20.810 "base_bdevs_list": [ 00:18:20.810 { 00:18:20.810 "name": "spare", 00:18:20.810 "uuid": "233482c3-daad-59d7-a1bd-5767fb934b41", 00:18:20.810 "is_configured": true, 00:18:20.810 "data_offset": 0, 00:18:20.810 "data_size": 65536 00:18:20.810 }, 00:18:20.810 { 00:18:20.810 "name": "BaseBdev2", 00:18:20.810 "uuid": "c5efe8c1-7f5b-5b02-be5d-d08e14b1604d", 00:18:20.810 "is_configured": true, 00:18:20.810 "data_offset": 0, 00:18:20.810 "data_size": 65536 00:18:20.810 }, 00:18:20.810 { 00:18:20.810 "name": "BaseBdev3", 00:18:20.810 "uuid": "a55434fe-c6b7-5fbd-bac6-cd4bd216292f", 00:18:20.810 "is_configured": true, 00:18:20.810 "data_offset": 0, 00:18:20.810 "data_size": 65536 00:18:20.810 }, 00:18:20.810 { 00:18:20.810 "name": "BaseBdev4", 00:18:20.810 "uuid": "11357154-4eee-5ee2-a767-02f18363386b", 00:18:20.810 "is_configured": true, 00:18:20.810 "data_offset": 0, 00:18:20.810 "data_size": 65536 00:18:20.810 } 00:18:20.810 ] 00:18:20.810 }' 00:18:20.810 21:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.810 21:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:20.810 21:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.810 21:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:20.810 21:45:21 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:21.745 21:45:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:21.745 21:45:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:21.745 21:45:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.745 21:45:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:21.745 21:45:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:21.745 21:45:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.745 21:45:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.745 21:45:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.745 21:45:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.745 21:45:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.745 21:45:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.745 21:45:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.745 "name": "raid_bdev1", 00:18:21.745 "uuid": "abbb63de-919d-4528-a80b-460225d1a6a5", 00:18:21.745 "strip_size_kb": 64, 00:18:21.745 "state": "online", 00:18:21.745 "raid_level": "raid5f", 00:18:21.745 "superblock": false, 00:18:21.745 "num_base_bdevs": 4, 00:18:21.745 "num_base_bdevs_discovered": 4, 00:18:21.745 "num_base_bdevs_operational": 4, 00:18:21.745 "process": { 00:18:21.745 "type": "rebuild", 00:18:21.745 "target": "spare", 00:18:21.745 "progress": { 00:18:21.745 "blocks": 42240, 00:18:21.745 "percent": 21 00:18:21.745 } 00:18:21.745 }, 00:18:21.745 "base_bdevs_list": [ 00:18:21.745 { 
00:18:21.745 "name": "spare", 00:18:21.745 "uuid": "233482c3-daad-59d7-a1bd-5767fb934b41", 00:18:21.745 "is_configured": true, 00:18:21.745 "data_offset": 0, 00:18:21.745 "data_size": 65536 00:18:21.745 }, 00:18:21.745 { 00:18:21.745 "name": "BaseBdev2", 00:18:21.745 "uuid": "c5efe8c1-7f5b-5b02-be5d-d08e14b1604d", 00:18:21.745 "is_configured": true, 00:18:21.745 "data_offset": 0, 00:18:21.745 "data_size": 65536 00:18:21.745 }, 00:18:21.745 { 00:18:21.745 "name": "BaseBdev3", 00:18:21.745 "uuid": "a55434fe-c6b7-5fbd-bac6-cd4bd216292f", 00:18:21.745 "is_configured": true, 00:18:21.745 "data_offset": 0, 00:18:21.745 "data_size": 65536 00:18:21.745 }, 00:18:21.745 { 00:18:21.745 "name": "BaseBdev4", 00:18:21.745 "uuid": "11357154-4eee-5ee2-a767-02f18363386b", 00:18:21.745 "is_configured": true, 00:18:21.745 "data_offset": 0, 00:18:21.745 "data_size": 65536 00:18:21.745 } 00:18:21.745 ] 00:18:21.745 }' 00:18:21.745 21:45:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:22.004 21:45:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:22.004 21:45:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:22.004 21:45:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:22.004 21:45:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:22.939 21:45:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:22.939 21:45:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:22.939 21:45:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:22.939 21:45:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:22.939 21:45:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:18:22.939 21:45:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:22.939 21:45:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.939 21:45:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.939 21:45:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.939 21:45:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.939 21:45:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.939 21:45:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:22.939 "name": "raid_bdev1", 00:18:22.939 "uuid": "abbb63de-919d-4528-a80b-460225d1a6a5", 00:18:22.939 "strip_size_kb": 64, 00:18:22.939 "state": "online", 00:18:22.939 "raid_level": "raid5f", 00:18:22.939 "superblock": false, 00:18:22.939 "num_base_bdevs": 4, 00:18:22.939 "num_base_bdevs_discovered": 4, 00:18:22.939 "num_base_bdevs_operational": 4, 00:18:22.939 "process": { 00:18:22.939 "type": "rebuild", 00:18:22.939 "target": "spare", 00:18:22.939 "progress": { 00:18:22.939 "blocks": 63360, 00:18:22.939 "percent": 32 00:18:22.939 } 00:18:22.939 }, 00:18:22.939 "base_bdevs_list": [ 00:18:22.939 { 00:18:22.939 "name": "spare", 00:18:22.939 "uuid": "233482c3-daad-59d7-a1bd-5767fb934b41", 00:18:22.939 "is_configured": true, 00:18:22.939 "data_offset": 0, 00:18:22.939 "data_size": 65536 00:18:22.939 }, 00:18:22.939 { 00:18:22.939 "name": "BaseBdev2", 00:18:22.939 "uuid": "c5efe8c1-7f5b-5b02-be5d-d08e14b1604d", 00:18:22.939 "is_configured": true, 00:18:22.939 "data_offset": 0, 00:18:22.939 "data_size": 65536 00:18:22.939 }, 00:18:22.939 { 00:18:22.939 "name": "BaseBdev3", 00:18:22.939 "uuid": "a55434fe-c6b7-5fbd-bac6-cd4bd216292f", 00:18:22.939 "is_configured": true, 00:18:22.939 "data_offset": 0, 00:18:22.939 
"data_size": 65536 00:18:22.939 }, 00:18:22.939 { 00:18:22.939 "name": "BaseBdev4", 00:18:22.939 "uuid": "11357154-4eee-5ee2-a767-02f18363386b", 00:18:22.939 "is_configured": true, 00:18:22.939 "data_offset": 0, 00:18:22.939 "data_size": 65536 00:18:22.939 } 00:18:22.939 ] 00:18:22.939 }' 00:18:22.939 21:45:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:22.939 21:45:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:22.939 21:45:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:23.198 21:45:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:23.198 21:45:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:24.134 21:45:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:24.134 21:45:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:24.134 21:45:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:24.134 21:45:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:24.134 21:45:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:24.134 21:45:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:24.134 21:45:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.134 21:45:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.134 21:45:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.134 21:45:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.134 21:45:24 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.134 21:45:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:24.134 "name": "raid_bdev1", 00:18:24.134 "uuid": "abbb63de-919d-4528-a80b-460225d1a6a5", 00:18:24.134 "strip_size_kb": 64, 00:18:24.134 "state": "online", 00:18:24.134 "raid_level": "raid5f", 00:18:24.134 "superblock": false, 00:18:24.134 "num_base_bdevs": 4, 00:18:24.134 "num_base_bdevs_discovered": 4, 00:18:24.134 "num_base_bdevs_operational": 4, 00:18:24.134 "process": { 00:18:24.134 "type": "rebuild", 00:18:24.134 "target": "spare", 00:18:24.134 "progress": { 00:18:24.134 "blocks": 86400, 00:18:24.134 "percent": 43 00:18:24.134 } 00:18:24.134 }, 00:18:24.134 "base_bdevs_list": [ 00:18:24.134 { 00:18:24.134 "name": "spare", 00:18:24.134 "uuid": "233482c3-daad-59d7-a1bd-5767fb934b41", 00:18:24.134 "is_configured": true, 00:18:24.134 "data_offset": 0, 00:18:24.134 "data_size": 65536 00:18:24.134 }, 00:18:24.134 { 00:18:24.134 "name": "BaseBdev2", 00:18:24.134 "uuid": "c5efe8c1-7f5b-5b02-be5d-d08e14b1604d", 00:18:24.134 "is_configured": true, 00:18:24.134 "data_offset": 0, 00:18:24.134 "data_size": 65536 00:18:24.135 }, 00:18:24.135 { 00:18:24.135 "name": "BaseBdev3", 00:18:24.135 "uuid": "a55434fe-c6b7-5fbd-bac6-cd4bd216292f", 00:18:24.135 "is_configured": true, 00:18:24.135 "data_offset": 0, 00:18:24.135 "data_size": 65536 00:18:24.135 }, 00:18:24.135 { 00:18:24.135 "name": "BaseBdev4", 00:18:24.135 "uuid": "11357154-4eee-5ee2-a767-02f18363386b", 00:18:24.135 "is_configured": true, 00:18:24.135 "data_offset": 0, 00:18:24.135 "data_size": 65536 00:18:24.135 } 00:18:24.135 ] 00:18:24.135 }' 00:18:24.135 21:45:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:24.135 21:45:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:24.135 21:45:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:18:24.135 21:45:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:24.135 21:45:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:25.516 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:25.516 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:25.516 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:25.516 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:25.516 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:25.516 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:25.516 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.516 21:45:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.516 21:45:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.516 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.516 21:45:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.516 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:25.516 "name": "raid_bdev1", 00:18:25.516 "uuid": "abbb63de-919d-4528-a80b-460225d1a6a5", 00:18:25.516 "strip_size_kb": 64, 00:18:25.516 "state": "online", 00:18:25.516 "raid_level": "raid5f", 00:18:25.516 "superblock": false, 00:18:25.516 "num_base_bdevs": 4, 00:18:25.516 "num_base_bdevs_discovered": 4, 00:18:25.516 "num_base_bdevs_operational": 4, 00:18:25.516 "process": { 00:18:25.516 "type": "rebuild", 00:18:25.516 "target": "spare", 00:18:25.516 
"progress": { 00:18:25.516 "blocks": 107520, 00:18:25.516 "percent": 54 00:18:25.516 } 00:18:25.516 }, 00:18:25.516 "base_bdevs_list": [ 00:18:25.516 { 00:18:25.516 "name": "spare", 00:18:25.516 "uuid": "233482c3-daad-59d7-a1bd-5767fb934b41", 00:18:25.516 "is_configured": true, 00:18:25.516 "data_offset": 0, 00:18:25.516 "data_size": 65536 00:18:25.516 }, 00:18:25.516 { 00:18:25.516 "name": "BaseBdev2", 00:18:25.516 "uuid": "c5efe8c1-7f5b-5b02-be5d-d08e14b1604d", 00:18:25.516 "is_configured": true, 00:18:25.516 "data_offset": 0, 00:18:25.516 "data_size": 65536 00:18:25.516 }, 00:18:25.516 { 00:18:25.516 "name": "BaseBdev3", 00:18:25.516 "uuid": "a55434fe-c6b7-5fbd-bac6-cd4bd216292f", 00:18:25.516 "is_configured": true, 00:18:25.516 "data_offset": 0, 00:18:25.516 "data_size": 65536 00:18:25.516 }, 00:18:25.516 { 00:18:25.516 "name": "BaseBdev4", 00:18:25.516 "uuid": "11357154-4eee-5ee2-a767-02f18363386b", 00:18:25.516 "is_configured": true, 00:18:25.516 "data_offset": 0, 00:18:25.516 "data_size": 65536 00:18:25.516 } 00:18:25.516 ] 00:18:25.516 }' 00:18:25.516 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:25.516 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:25.516 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:25.516 21:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:25.516 21:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:26.454 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:26.454 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:26.454 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:26.454 21:45:27 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:26.454 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:26.454 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:26.454 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.454 21:45:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.454 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.454 21:45:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.454 21:45:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.454 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:26.454 "name": "raid_bdev1", 00:18:26.454 "uuid": "abbb63de-919d-4528-a80b-460225d1a6a5", 00:18:26.454 "strip_size_kb": 64, 00:18:26.454 "state": "online", 00:18:26.454 "raid_level": "raid5f", 00:18:26.454 "superblock": false, 00:18:26.454 "num_base_bdevs": 4, 00:18:26.454 "num_base_bdevs_discovered": 4, 00:18:26.454 "num_base_bdevs_operational": 4, 00:18:26.454 "process": { 00:18:26.454 "type": "rebuild", 00:18:26.454 "target": "spare", 00:18:26.454 "progress": { 00:18:26.454 "blocks": 130560, 00:18:26.454 "percent": 66 00:18:26.454 } 00:18:26.454 }, 00:18:26.454 "base_bdevs_list": [ 00:18:26.454 { 00:18:26.454 "name": "spare", 00:18:26.454 "uuid": "233482c3-daad-59d7-a1bd-5767fb934b41", 00:18:26.454 "is_configured": true, 00:18:26.454 "data_offset": 0, 00:18:26.454 "data_size": 65536 00:18:26.454 }, 00:18:26.454 { 00:18:26.454 "name": "BaseBdev2", 00:18:26.454 "uuid": "c5efe8c1-7f5b-5b02-be5d-d08e14b1604d", 00:18:26.454 "is_configured": true, 00:18:26.454 "data_offset": 0, 00:18:26.454 "data_size": 65536 00:18:26.454 }, 00:18:26.454 { 
00:18:26.454 "name": "BaseBdev3", 00:18:26.454 "uuid": "a55434fe-c6b7-5fbd-bac6-cd4bd216292f", 00:18:26.454 "is_configured": true, 00:18:26.454 "data_offset": 0, 00:18:26.454 "data_size": 65536 00:18:26.454 }, 00:18:26.454 { 00:18:26.454 "name": "BaseBdev4", 00:18:26.454 "uuid": "11357154-4eee-5ee2-a767-02f18363386b", 00:18:26.454 "is_configured": true, 00:18:26.454 "data_offset": 0, 00:18:26.454 "data_size": 65536 00:18:26.454 } 00:18:26.454 ] 00:18:26.454 }' 00:18:26.454 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:26.454 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:26.454 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:26.454 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:26.454 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:27.393 21:45:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:27.393 21:45:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:27.393 21:45:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:27.393 21:45:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:27.393 21:45:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:27.393 21:45:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:27.393 21:45:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.393 21:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.393 21:45:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:18:27.393 21:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.651 21:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.651 21:45:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:27.651 "name": "raid_bdev1", 00:18:27.651 "uuid": "abbb63de-919d-4528-a80b-460225d1a6a5", 00:18:27.651 "strip_size_kb": 64, 00:18:27.651 "state": "online", 00:18:27.651 "raid_level": "raid5f", 00:18:27.651 "superblock": false, 00:18:27.651 "num_base_bdevs": 4, 00:18:27.651 "num_base_bdevs_discovered": 4, 00:18:27.651 "num_base_bdevs_operational": 4, 00:18:27.651 "process": { 00:18:27.651 "type": "rebuild", 00:18:27.651 "target": "spare", 00:18:27.651 "progress": { 00:18:27.651 "blocks": 151680, 00:18:27.651 "percent": 77 00:18:27.651 } 00:18:27.651 }, 00:18:27.651 "base_bdevs_list": [ 00:18:27.651 { 00:18:27.651 "name": "spare", 00:18:27.651 "uuid": "233482c3-daad-59d7-a1bd-5767fb934b41", 00:18:27.651 "is_configured": true, 00:18:27.651 "data_offset": 0, 00:18:27.651 "data_size": 65536 00:18:27.651 }, 00:18:27.651 { 00:18:27.651 "name": "BaseBdev2", 00:18:27.651 "uuid": "c5efe8c1-7f5b-5b02-be5d-d08e14b1604d", 00:18:27.651 "is_configured": true, 00:18:27.651 "data_offset": 0, 00:18:27.652 "data_size": 65536 00:18:27.652 }, 00:18:27.652 { 00:18:27.652 "name": "BaseBdev3", 00:18:27.652 "uuid": "a55434fe-c6b7-5fbd-bac6-cd4bd216292f", 00:18:27.652 "is_configured": true, 00:18:27.652 "data_offset": 0, 00:18:27.652 "data_size": 65536 00:18:27.652 }, 00:18:27.652 { 00:18:27.652 "name": "BaseBdev4", 00:18:27.652 "uuid": "11357154-4eee-5ee2-a767-02f18363386b", 00:18:27.652 "is_configured": true, 00:18:27.652 "data_offset": 0, 00:18:27.652 "data_size": 65536 00:18:27.652 } 00:18:27.652 ] 00:18:27.652 }' 00:18:27.652 21:45:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:27.652 21:45:28 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:27.652 21:45:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:27.652 21:45:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:27.652 21:45:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:28.589 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:28.589 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:28.589 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:28.589 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:28.589 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:28.589 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:28.589 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.589 21:45:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.589 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.589 21:45:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.589 21:45:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.589 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:28.589 "name": "raid_bdev1", 00:18:28.589 "uuid": "abbb63de-919d-4528-a80b-460225d1a6a5", 00:18:28.589 "strip_size_kb": 64, 00:18:28.589 "state": "online", 00:18:28.589 "raid_level": "raid5f", 00:18:28.589 "superblock": false, 00:18:28.589 "num_base_bdevs": 4, 00:18:28.589 
"num_base_bdevs_discovered": 4, 00:18:28.589 "num_base_bdevs_operational": 4, 00:18:28.589 "process": { 00:18:28.589 "type": "rebuild", 00:18:28.589 "target": "spare", 00:18:28.589 "progress": { 00:18:28.589 "blocks": 172800, 00:18:28.589 "percent": 87 00:18:28.589 } 00:18:28.589 }, 00:18:28.589 "base_bdevs_list": [ 00:18:28.589 { 00:18:28.589 "name": "spare", 00:18:28.589 "uuid": "233482c3-daad-59d7-a1bd-5767fb934b41", 00:18:28.589 "is_configured": true, 00:18:28.589 "data_offset": 0, 00:18:28.589 "data_size": 65536 00:18:28.589 }, 00:18:28.589 { 00:18:28.589 "name": "BaseBdev2", 00:18:28.589 "uuid": "c5efe8c1-7f5b-5b02-be5d-d08e14b1604d", 00:18:28.589 "is_configured": true, 00:18:28.589 "data_offset": 0, 00:18:28.589 "data_size": 65536 00:18:28.589 }, 00:18:28.589 { 00:18:28.589 "name": "BaseBdev3", 00:18:28.589 "uuid": "a55434fe-c6b7-5fbd-bac6-cd4bd216292f", 00:18:28.589 "is_configured": true, 00:18:28.589 "data_offset": 0, 00:18:28.589 "data_size": 65536 00:18:28.589 }, 00:18:28.589 { 00:18:28.589 "name": "BaseBdev4", 00:18:28.589 "uuid": "11357154-4eee-5ee2-a767-02f18363386b", 00:18:28.589 "is_configured": true, 00:18:28.589 "data_offset": 0, 00:18:28.589 "data_size": 65536 00:18:28.589 } 00:18:28.589 ] 00:18:28.589 }' 00:18:28.589 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:28.848 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:28.848 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:28.848 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:28.848 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:29.786 21:45:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:29.786 21:45:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:18:29.786 21:45:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:29.786 21:45:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:29.786 21:45:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:29.786 21:45:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:29.786 21:45:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.786 21:45:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.786 21:45:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.786 21:45:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.786 21:45:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.786 21:45:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:29.786 "name": "raid_bdev1", 00:18:29.786 "uuid": "abbb63de-919d-4528-a80b-460225d1a6a5", 00:18:29.786 "strip_size_kb": 64, 00:18:29.786 "state": "online", 00:18:29.786 "raid_level": "raid5f", 00:18:29.786 "superblock": false, 00:18:29.786 "num_base_bdevs": 4, 00:18:29.786 "num_base_bdevs_discovered": 4, 00:18:29.786 "num_base_bdevs_operational": 4, 00:18:29.786 "process": { 00:18:29.786 "type": "rebuild", 00:18:29.786 "target": "spare", 00:18:29.786 "progress": { 00:18:29.786 "blocks": 195840, 00:18:29.786 "percent": 99 00:18:29.786 } 00:18:29.786 }, 00:18:29.786 "base_bdevs_list": [ 00:18:29.786 { 00:18:29.786 "name": "spare", 00:18:29.786 "uuid": "233482c3-daad-59d7-a1bd-5767fb934b41", 00:18:29.786 "is_configured": true, 00:18:29.786 "data_offset": 0, 00:18:29.786 "data_size": 65536 00:18:29.786 }, 00:18:29.786 { 00:18:29.786 "name": "BaseBdev2", 00:18:29.786 "uuid": 
"c5efe8c1-7f5b-5b02-be5d-d08e14b1604d", 00:18:29.786 "is_configured": true, 00:18:29.786 "data_offset": 0, 00:18:29.786 "data_size": 65536 00:18:29.786 }, 00:18:29.786 { 00:18:29.786 "name": "BaseBdev3", 00:18:29.786 "uuid": "a55434fe-c6b7-5fbd-bac6-cd4bd216292f", 00:18:29.786 "is_configured": true, 00:18:29.786 "data_offset": 0, 00:18:29.786 "data_size": 65536 00:18:29.786 }, 00:18:29.786 { 00:18:29.786 "name": "BaseBdev4", 00:18:29.786 "uuid": "11357154-4eee-5ee2-a767-02f18363386b", 00:18:29.786 "is_configured": true, 00:18:29.786 "data_offset": 0, 00:18:29.786 "data_size": 65536 00:18:29.786 } 00:18:29.786 ] 00:18:29.786 }' 00:18:29.786 21:45:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:29.786 [2024-12-10 21:45:30.539860] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:29.786 [2024-12-10 21:45:30.539989] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:29.786 [2024-12-10 21:45:30.540070] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:29.786 21:45:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:29.786 21:45:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:30.045 21:45:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:30.045 21:45:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:30.998 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:30.998 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:30.998 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:30.998 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:18:30.998 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:30.998 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:30.998 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.998 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.998 21:45:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.998 21:45:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.998 21:45:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.998 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:30.998 "name": "raid_bdev1", 00:18:30.998 "uuid": "abbb63de-919d-4528-a80b-460225d1a6a5", 00:18:30.998 "strip_size_kb": 64, 00:18:30.998 "state": "online", 00:18:30.998 "raid_level": "raid5f", 00:18:30.998 "superblock": false, 00:18:30.998 "num_base_bdevs": 4, 00:18:30.998 "num_base_bdevs_discovered": 4, 00:18:30.998 "num_base_bdevs_operational": 4, 00:18:30.998 "base_bdevs_list": [ 00:18:30.998 { 00:18:30.998 "name": "spare", 00:18:30.998 "uuid": "233482c3-daad-59d7-a1bd-5767fb934b41", 00:18:30.998 "is_configured": true, 00:18:30.998 "data_offset": 0, 00:18:30.998 "data_size": 65536 00:18:30.998 }, 00:18:30.998 { 00:18:30.998 "name": "BaseBdev2", 00:18:30.998 "uuid": "c5efe8c1-7f5b-5b02-be5d-d08e14b1604d", 00:18:30.998 "is_configured": true, 00:18:30.998 "data_offset": 0, 00:18:30.998 "data_size": 65536 00:18:30.998 }, 00:18:30.998 { 00:18:30.998 "name": "BaseBdev3", 00:18:30.998 "uuid": "a55434fe-c6b7-5fbd-bac6-cd4bd216292f", 00:18:30.998 "is_configured": true, 00:18:30.998 "data_offset": 0, 00:18:30.998 "data_size": 65536 00:18:30.998 }, 00:18:30.998 { 00:18:30.998 "name": "BaseBdev4", 00:18:30.998 
"uuid": "11357154-4eee-5ee2-a767-02f18363386b", 00:18:30.998 "is_configured": true, 00:18:30.998 "data_offset": 0, 00:18:30.998 "data_size": 65536 00:18:30.998 } 00:18:30.998 ] 00:18:30.998 }' 00:18:30.998 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:30.998 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:30.998 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:30.998 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:30.998 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:18:30.998 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:30.998 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:30.998 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:30.998 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:30.998 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:30.998 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.998 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.998 21:45:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.998 21:45:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.998 21:45:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.998 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:30.998 "name": "raid_bdev1", 00:18:30.998 "uuid": 
"abbb63de-919d-4528-a80b-460225d1a6a5", 00:18:30.998 "strip_size_kb": 64, 00:18:30.998 "state": "online", 00:18:30.998 "raid_level": "raid5f", 00:18:30.998 "superblock": false, 00:18:30.998 "num_base_bdevs": 4, 00:18:30.998 "num_base_bdevs_discovered": 4, 00:18:30.998 "num_base_bdevs_operational": 4, 00:18:30.998 "base_bdevs_list": [ 00:18:30.998 { 00:18:30.998 "name": "spare", 00:18:30.998 "uuid": "233482c3-daad-59d7-a1bd-5767fb934b41", 00:18:30.998 "is_configured": true, 00:18:30.998 "data_offset": 0, 00:18:30.998 "data_size": 65536 00:18:30.999 }, 00:18:30.999 { 00:18:30.999 "name": "BaseBdev2", 00:18:30.999 "uuid": "c5efe8c1-7f5b-5b02-be5d-d08e14b1604d", 00:18:30.999 "is_configured": true, 00:18:30.999 "data_offset": 0, 00:18:30.999 "data_size": 65536 00:18:30.999 }, 00:18:30.999 { 00:18:30.999 "name": "BaseBdev3", 00:18:30.999 "uuid": "a55434fe-c6b7-5fbd-bac6-cd4bd216292f", 00:18:30.999 "is_configured": true, 00:18:30.999 "data_offset": 0, 00:18:30.999 "data_size": 65536 00:18:30.999 }, 00:18:30.999 { 00:18:30.999 "name": "BaseBdev4", 00:18:30.999 "uuid": "11357154-4eee-5ee2-a767-02f18363386b", 00:18:30.999 "is_configured": true, 00:18:30.999 "data_offset": 0, 00:18:30.999 "data_size": 65536 00:18:30.999 } 00:18:30.999 ] 00:18:30.999 }' 00:18:30.999 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:31.258 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:31.258 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:31.258 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:31.258 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:31.258 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.258 21:45:31 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:31.258 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:31.258 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:31.258 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:31.258 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.258 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.258 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.258 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.258 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.258 21:45:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.258 21:45:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.258 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.258 21:45:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.258 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.258 "name": "raid_bdev1", 00:18:31.258 "uuid": "abbb63de-919d-4528-a80b-460225d1a6a5", 00:18:31.258 "strip_size_kb": 64, 00:18:31.258 "state": "online", 00:18:31.258 "raid_level": "raid5f", 00:18:31.258 "superblock": false, 00:18:31.258 "num_base_bdevs": 4, 00:18:31.258 "num_base_bdevs_discovered": 4, 00:18:31.258 "num_base_bdevs_operational": 4, 00:18:31.258 "base_bdevs_list": [ 00:18:31.258 { 00:18:31.258 "name": "spare", 00:18:31.258 "uuid": "233482c3-daad-59d7-a1bd-5767fb934b41", 00:18:31.258 "is_configured": 
true, 00:18:31.258 "data_offset": 0, 00:18:31.258 "data_size": 65536 00:18:31.258 }, 00:18:31.258 { 00:18:31.258 "name": "BaseBdev2", 00:18:31.258 "uuid": "c5efe8c1-7f5b-5b02-be5d-d08e14b1604d", 00:18:31.258 "is_configured": true, 00:18:31.258 "data_offset": 0, 00:18:31.258 "data_size": 65536 00:18:31.258 }, 00:18:31.258 { 00:18:31.258 "name": "BaseBdev3", 00:18:31.258 "uuid": "a55434fe-c6b7-5fbd-bac6-cd4bd216292f", 00:18:31.258 "is_configured": true, 00:18:31.258 "data_offset": 0, 00:18:31.258 "data_size": 65536 00:18:31.258 }, 00:18:31.258 { 00:18:31.258 "name": "BaseBdev4", 00:18:31.258 "uuid": "11357154-4eee-5ee2-a767-02f18363386b", 00:18:31.258 "is_configured": true, 00:18:31.258 "data_offset": 0, 00:18:31.258 "data_size": 65536 00:18:31.258 } 00:18:31.258 ] 00:18:31.258 }' 00:18:31.258 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.258 21:45:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.518 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:31.518 21:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.518 21:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.518 [2024-12-10 21:45:32.256196] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:31.518 [2024-12-10 21:45:32.256228] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:31.518 [2024-12-10 21:45:32.256313] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:31.518 [2024-12-10 21:45:32.256406] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:31.518 [2024-12-10 21:45:32.256434] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:31.518 21:45:32 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.518 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:18:31.518 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.518 21:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.518 21:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.518 21:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.778 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:31.778 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:31.778 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:31.778 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:31.778 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:31.778 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:31.778 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:31.778 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:31.778 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:31.778 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:31.778 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:31.778 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:31.778 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:31.778 /dev/nbd0 00:18:31.778 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:31.778 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:31.778 21:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:31.778 21:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:31.778 21:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:31.778 21:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:31.778 21:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:31.778 21:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:31.778 21:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:31.778 21:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:31.778 21:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:31.778 1+0 records in 00:18:31.778 1+0 records out 00:18:31.778 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000332968 s, 12.3 MB/s 00:18:31.778 21:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:31.778 21:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:31.778 21:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:31.778 21:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:31.778 21:45:32 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@893 -- # return 0 00:18:31.778 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:31.778 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:31.778 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:32.038 /dev/nbd1 00:18:32.038 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:32.038 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:32.038 21:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:32.038 21:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:32.038 21:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:32.038 21:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:32.038 21:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:32.038 21:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:32.038 21:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:32.038 21:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:32.038 21:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:32.038 1+0 records in 00:18:32.038 1+0 records out 00:18:32.038 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390264 s, 10.5 MB/s 00:18:32.038 21:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:32.038 21:45:32 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@890 -- # size=4096 00:18:32.038 21:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:32.038 21:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:32.038 21:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:32.038 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:32.038 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:32.038 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:32.298 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:32.298 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:32.298 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:32.298 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:32.298 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:32.298 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:32.298 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:32.557 21:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:32.557 21:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:32.557 21:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:32.557 21:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:32.557 21:45:33 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:32.557 21:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:32.557 21:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:32.557 21:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:32.557 21:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:32.557 21:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:32.817 21:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:32.817 21:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:32.817 21:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:32.817 21:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:32.817 21:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:32.817 21:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:32.817 21:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:32.817 21:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:32.817 21:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:32.817 21:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84782 00:18:32.818 21:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84782 ']' 00:18:32.818 21:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84782 00:18:32.818 21:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:18:32.818 21:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 
-- # '[' Linux = Linux ']' 00:18:32.818 21:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84782 00:18:32.818 killing process with pid 84782 00:18:32.818 Received shutdown signal, test time was about 60.000000 seconds 00:18:32.818 00:18:32.818 Latency(us) 00:18:32.818 [2024-12-10T21:45:33.601Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.818 [2024-12-10T21:45:33.601Z] =================================================================================================================== 00:18:32.818 [2024-12-10T21:45:33.601Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:32.818 21:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:32.818 21:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:32.818 21:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84782' 00:18:32.818 21:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84782 00:18:32.818 [2024-12-10 21:45:33.442611] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:32.818 21:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84782 00:18:33.386 [2024-12-10 21:45:33.906607] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:34.396 21:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:18:34.396 00:18:34.396 real 0m19.851s 00:18:34.396 user 0m23.681s 00:18:34.396 sys 0m2.118s 00:18:34.396 21:45:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:34.396 21:45:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.396 ************************************ 00:18:34.396 END TEST raid5f_rebuild_test 00:18:34.396 ************************************ 00:18:34.396 21:45:35 
bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:18:34.396 21:45:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:34.396 21:45:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:34.396 21:45:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:34.396 ************************************ 00:18:34.396 START TEST raid5f_rebuild_test_sb 00:18:34.396 ************************************ 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85294 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85294 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85294 ']' 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:34.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:34.396 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.396 [2024-12-10 21:45:35.152362] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:18:34.396 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:34.396 Zero copy mechanism will not be used. 
00:18:34.396 [2024-12-10 21:45:35.152582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85294 ] 00:18:34.659 [2024-12-10 21:45:35.323791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.659 [2024-12-10 21:45:35.423202] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.920 [2024-12-10 21:45:35.610084] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:34.920 [2024-12-10 21:45:35.610115] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:35.180 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:35.180 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:18:35.180 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:35.180 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:35.180 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.180 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.440 BaseBdev1_malloc 00:18:35.440 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.440 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:35.440 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.440 21:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.440 [2024-12-10 21:45:36.005972] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:35.440 [2024-12-10 21:45:36.006046] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.441 [2024-12-10 21:45:36.006068] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:35.441 [2024-12-10 21:45:36.006079] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.441 [2024-12-10 21:45:36.008076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.441 [2024-12-10 21:45:36.008116] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:35.441 BaseBdev1 00:18:35.441 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.441 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:35.441 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:35.441 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.441 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.441 BaseBdev2_malloc 00:18:35.441 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.441 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:35.441 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.441 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.441 [2024-12-10 21:45:36.059624] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:35.441 [2024-12-10 21:45:36.059747] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:18:35.441 [2024-12-10 21:45:36.059770] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:35.441 [2024-12-10 21:45:36.059780] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.441 [2024-12-10 21:45:36.061994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.441 [2024-12-10 21:45:36.062032] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:35.441 BaseBdev2 00:18:35.441 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.441 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:35.441 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:35.441 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.441 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.441 BaseBdev3_malloc 00:18:35.441 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.441 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:35.441 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.441 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.441 [2024-12-10 21:45:36.149183] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:35.441 [2024-12-10 21:45:36.149235] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.441 [2024-12-10 21:45:36.149265] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:35.441 [2024-12-10 
21:45:36.149291] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.441 [2024-12-10 21:45:36.151265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.441 [2024-12-10 21:45:36.151380] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:35.441 BaseBdev3 00:18:35.441 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.441 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:35.441 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:35.441 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.441 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.441 BaseBdev4_malloc 00:18:35.441 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.441 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:35.441 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.441 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.441 [2024-12-10 21:45:36.199619] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:35.441 [2024-12-10 21:45:36.199672] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.441 [2024-12-10 21:45:36.199691] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:35.441 [2024-12-10 21:45:36.199701] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.441 [2024-12-10 21:45:36.201696] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:18:35.441 [2024-12-10 21:45:36.201737] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:35.441 BaseBdev4 00:18:35.441 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.441 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:35.441 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.441 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.701 spare_malloc 00:18:35.701 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.701 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:35.701 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.701 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.701 spare_delay 00:18:35.701 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.701 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:35.701 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.701 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.701 [2024-12-10 21:45:36.262398] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:35.701 [2024-12-10 21:45:36.262476] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.701 [2024-12-10 21:45:36.262493] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:18:35.701 [2024-12-10 21:45:36.262502] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.701 [2024-12-10 21:45:36.264493] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.701 [2024-12-10 21:45:36.264532] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:35.701 spare 00:18:35.701 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.701 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:35.701 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.701 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.701 [2024-12-10 21:45:36.274448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:35.701 [2024-12-10 21:45:36.276185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:35.701 [2024-12-10 21:45:36.276248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:35.701 [2024-12-10 21:45:36.276297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:35.701 [2024-12-10 21:45:36.276498] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:35.701 [2024-12-10 21:45:36.276513] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:35.701 [2024-12-10 21:45:36.276775] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:35.701 [2024-12-10 21:45:36.284138] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:35.701 [2024-12-10 21:45:36.284161] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:18:35.701 [2024-12-10 21:45:36.284354] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:35.701 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.701 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:35.701 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:35.701 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:35.701 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:35.701 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:35.701 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:35.701 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.701 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.701 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.701 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.701 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.701 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.701 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.701 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.701 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.701 21:45:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.701 "name": "raid_bdev1", 00:18:35.701 "uuid": "5e7d55c6-2922-4bdc-a690-ed091b9c7943", 00:18:35.701 "strip_size_kb": 64, 00:18:35.701 "state": "online", 00:18:35.701 "raid_level": "raid5f", 00:18:35.701 "superblock": true, 00:18:35.701 "num_base_bdevs": 4, 00:18:35.701 "num_base_bdevs_discovered": 4, 00:18:35.701 "num_base_bdevs_operational": 4, 00:18:35.701 "base_bdevs_list": [ 00:18:35.701 { 00:18:35.701 "name": "BaseBdev1", 00:18:35.701 "uuid": "a4e90fa2-fcb7-5268-a29f-2e3016b36f23", 00:18:35.701 "is_configured": true, 00:18:35.701 "data_offset": 2048, 00:18:35.701 "data_size": 63488 00:18:35.701 }, 00:18:35.701 { 00:18:35.701 "name": "BaseBdev2", 00:18:35.701 "uuid": "193666aa-bfd0-5ea4-a3c4-d3dfff993717", 00:18:35.701 "is_configured": true, 00:18:35.701 "data_offset": 2048, 00:18:35.701 "data_size": 63488 00:18:35.701 }, 00:18:35.701 { 00:18:35.701 "name": "BaseBdev3", 00:18:35.701 "uuid": "880a0627-ad2c-59f0-8bab-2ead3c6acb00", 00:18:35.701 "is_configured": true, 00:18:35.701 "data_offset": 2048, 00:18:35.701 "data_size": 63488 00:18:35.701 }, 00:18:35.701 { 00:18:35.701 "name": "BaseBdev4", 00:18:35.701 "uuid": "91d71167-4a07-5447-a21e-05017b731016", 00:18:35.701 "is_configured": true, 00:18:35.701 "data_offset": 2048, 00:18:35.701 "data_size": 63488 00:18:35.701 } 00:18:35.701 ] 00:18:35.701 }' 00:18:35.701 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.701 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.961 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:35.961 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:35.961 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.961 21:45:36 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.961 [2024-12-10 21:45:36.724275] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:36.222 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.222 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:18:36.222 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:36.222 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.222 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.222 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.222 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.222 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:18:36.222 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:36.222 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:36.222 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:36.222 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:36.222 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:36.222 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:36.222 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:36.222 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:36.222 21:45:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:36.222 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:36.222 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:36.222 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:36.222 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:36.222 [2024-12-10 21:45:36.959705] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:36.222 /dev/nbd0 00:18:36.222 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:36.222 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:36.222 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:36.222 21:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:36.222 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:36.222 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:36.222 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:36.482 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:36.482 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:36.482 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:36.482 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:36.482 1+0 records in 00:18:36.482 
1+0 records out 00:18:36.482 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215792 s, 19.0 MB/s 00:18:36.482 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:36.482 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:36.482 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:36.482 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:36.482 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:36.482 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:36.482 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:36.482 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:36.482 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:18:36.482 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:18:36.482 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:18:36.740 496+0 records in 00:18:36.740 496+0 records out 00:18:36.740 97517568 bytes (98 MB, 93 MiB) copied, 0.447353 s, 218 MB/s 00:18:36.740 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:36.740 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:36.740 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:36.740 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:36.740 21:45:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:36.740 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:36.740 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:36.999 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:36.999 [2024-12-10 21:45:37.687247] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:36.999 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:36.999 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:36.999 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:36.999 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:36.999 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:36.999 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:36.999 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:36.999 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:36.999 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.999 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.999 [2024-12-10 21:45:37.705515] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:36.999 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.999 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:36.999 21:45:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:36.999 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:36.999 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:36.999 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:36.999 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:36.999 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.999 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.999 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.999 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.999 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.999 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.999 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.999 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.000 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.000 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.000 "name": "raid_bdev1", 00:18:37.000 "uuid": "5e7d55c6-2922-4bdc-a690-ed091b9c7943", 00:18:37.000 "strip_size_kb": 64, 00:18:37.000 "state": "online", 00:18:37.000 "raid_level": "raid5f", 00:18:37.000 "superblock": true, 00:18:37.000 "num_base_bdevs": 4, 00:18:37.000 "num_base_bdevs_discovered": 3, 00:18:37.000 "num_base_bdevs_operational": 3, 00:18:37.000 
"base_bdevs_list": [ 00:18:37.000 { 00:18:37.000 "name": null, 00:18:37.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.000 "is_configured": false, 00:18:37.000 "data_offset": 0, 00:18:37.000 "data_size": 63488 00:18:37.000 }, 00:18:37.000 { 00:18:37.000 "name": "BaseBdev2", 00:18:37.000 "uuid": "193666aa-bfd0-5ea4-a3c4-d3dfff993717", 00:18:37.000 "is_configured": true, 00:18:37.000 "data_offset": 2048, 00:18:37.000 "data_size": 63488 00:18:37.000 }, 00:18:37.000 { 00:18:37.000 "name": "BaseBdev3", 00:18:37.000 "uuid": "880a0627-ad2c-59f0-8bab-2ead3c6acb00", 00:18:37.000 "is_configured": true, 00:18:37.000 "data_offset": 2048, 00:18:37.000 "data_size": 63488 00:18:37.000 }, 00:18:37.000 { 00:18:37.000 "name": "BaseBdev4", 00:18:37.000 "uuid": "91d71167-4a07-5447-a21e-05017b731016", 00:18:37.000 "is_configured": true, 00:18:37.000 "data_offset": 2048, 00:18:37.000 "data_size": 63488 00:18:37.000 } 00:18:37.000 ] 00:18:37.000 }' 00:18:37.000 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.000 21:45:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.569 21:45:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:37.569 21:45:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.569 21:45:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.569 [2024-12-10 21:45:38.108800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:37.569 [2024-12-10 21:45:38.124844] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:18:37.569 21:45:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.569 21:45:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:37.569 [2024-12-10 21:45:38.134738] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:38.508 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:38.508 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:38.508 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:38.508 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:38.508 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:38.508 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.508 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.508 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.508 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.508 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.508 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:38.508 "name": "raid_bdev1", 00:18:38.508 "uuid": "5e7d55c6-2922-4bdc-a690-ed091b9c7943", 00:18:38.508 "strip_size_kb": 64, 00:18:38.508 "state": "online", 00:18:38.508 "raid_level": "raid5f", 00:18:38.508 "superblock": true, 00:18:38.508 "num_base_bdevs": 4, 00:18:38.508 "num_base_bdevs_discovered": 4, 00:18:38.508 "num_base_bdevs_operational": 4, 00:18:38.508 "process": { 00:18:38.508 "type": "rebuild", 00:18:38.508 "target": "spare", 00:18:38.508 "progress": { 00:18:38.508 "blocks": 19200, 00:18:38.508 "percent": 10 00:18:38.508 } 00:18:38.508 }, 00:18:38.508 "base_bdevs_list": [ 00:18:38.508 { 00:18:38.508 "name": "spare", 00:18:38.508 "uuid": 
"8ce8d9aa-88da-57dd-936e-4d122368f191", 00:18:38.508 "is_configured": true, 00:18:38.508 "data_offset": 2048, 00:18:38.508 "data_size": 63488 00:18:38.508 }, 00:18:38.508 { 00:18:38.508 "name": "BaseBdev2", 00:18:38.508 "uuid": "193666aa-bfd0-5ea4-a3c4-d3dfff993717", 00:18:38.508 "is_configured": true, 00:18:38.508 "data_offset": 2048, 00:18:38.508 "data_size": 63488 00:18:38.508 }, 00:18:38.508 { 00:18:38.508 "name": "BaseBdev3", 00:18:38.508 "uuid": "880a0627-ad2c-59f0-8bab-2ead3c6acb00", 00:18:38.508 "is_configured": true, 00:18:38.508 "data_offset": 2048, 00:18:38.508 "data_size": 63488 00:18:38.508 }, 00:18:38.508 { 00:18:38.508 "name": "BaseBdev4", 00:18:38.508 "uuid": "91d71167-4a07-5447-a21e-05017b731016", 00:18:38.508 "is_configured": true, 00:18:38.508 "data_offset": 2048, 00:18:38.508 "data_size": 63488 00:18:38.508 } 00:18:38.508 ] 00:18:38.508 }' 00:18:38.508 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:38.508 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:38.508 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:38.508 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:38.508 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:38.508 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.508 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.508 [2024-12-10 21:45:39.285470] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:38.768 [2024-12-10 21:45:39.341514] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:38.768 [2024-12-10 21:45:39.341608] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:38.768 [2024-12-10 21:45:39.341627] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:38.768 [2024-12-10 21:45:39.341638] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:38.768 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.768 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:38.768 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:38.768 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:38.768 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:38.768 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:38.768 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:38.768 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.768 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.768 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:38.768 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.768 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.768 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.768 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.768 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:18:38.768 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.768 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.768 "name": "raid_bdev1", 00:18:38.768 "uuid": "5e7d55c6-2922-4bdc-a690-ed091b9c7943", 00:18:38.768 "strip_size_kb": 64, 00:18:38.768 "state": "online", 00:18:38.768 "raid_level": "raid5f", 00:18:38.768 "superblock": true, 00:18:38.768 "num_base_bdevs": 4, 00:18:38.768 "num_base_bdevs_discovered": 3, 00:18:38.768 "num_base_bdevs_operational": 3, 00:18:38.768 "base_bdevs_list": [ 00:18:38.768 { 00:18:38.768 "name": null, 00:18:38.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.768 "is_configured": false, 00:18:38.768 "data_offset": 0, 00:18:38.768 "data_size": 63488 00:18:38.768 }, 00:18:38.768 { 00:18:38.768 "name": "BaseBdev2", 00:18:38.768 "uuid": "193666aa-bfd0-5ea4-a3c4-d3dfff993717", 00:18:38.768 "is_configured": true, 00:18:38.768 "data_offset": 2048, 00:18:38.768 "data_size": 63488 00:18:38.768 }, 00:18:38.768 { 00:18:38.768 "name": "BaseBdev3", 00:18:38.768 "uuid": "880a0627-ad2c-59f0-8bab-2ead3c6acb00", 00:18:38.768 "is_configured": true, 00:18:38.768 "data_offset": 2048, 00:18:38.768 "data_size": 63488 00:18:38.768 }, 00:18:38.768 { 00:18:38.768 "name": "BaseBdev4", 00:18:38.768 "uuid": "91d71167-4a07-5447-a21e-05017b731016", 00:18:38.768 "is_configured": true, 00:18:38.768 "data_offset": 2048, 00:18:38.768 "data_size": 63488 00:18:38.768 } 00:18:38.768 ] 00:18:38.768 }' 00:18:38.768 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.768 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.027 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:39.027 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:39.028 
21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:39.028 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:39.028 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:39.028 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.028 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.028 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.028 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.286 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.286 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:39.286 "name": "raid_bdev1", 00:18:39.286 "uuid": "5e7d55c6-2922-4bdc-a690-ed091b9c7943", 00:18:39.286 "strip_size_kb": 64, 00:18:39.286 "state": "online", 00:18:39.286 "raid_level": "raid5f", 00:18:39.286 "superblock": true, 00:18:39.286 "num_base_bdevs": 4, 00:18:39.286 "num_base_bdevs_discovered": 3, 00:18:39.286 "num_base_bdevs_operational": 3, 00:18:39.286 "base_bdevs_list": [ 00:18:39.286 { 00:18:39.286 "name": null, 00:18:39.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.286 "is_configured": false, 00:18:39.286 "data_offset": 0, 00:18:39.286 "data_size": 63488 00:18:39.286 }, 00:18:39.286 { 00:18:39.286 "name": "BaseBdev2", 00:18:39.286 "uuid": "193666aa-bfd0-5ea4-a3c4-d3dfff993717", 00:18:39.286 "is_configured": true, 00:18:39.286 "data_offset": 2048, 00:18:39.286 "data_size": 63488 00:18:39.286 }, 00:18:39.286 { 00:18:39.286 "name": "BaseBdev3", 00:18:39.286 "uuid": "880a0627-ad2c-59f0-8bab-2ead3c6acb00", 00:18:39.286 "is_configured": true, 00:18:39.286 "data_offset": 2048, 00:18:39.286 
"data_size": 63488 00:18:39.286 }, 00:18:39.286 { 00:18:39.286 "name": "BaseBdev4", 00:18:39.286 "uuid": "91d71167-4a07-5447-a21e-05017b731016", 00:18:39.286 "is_configured": true, 00:18:39.286 "data_offset": 2048, 00:18:39.286 "data_size": 63488 00:18:39.286 } 00:18:39.286 ] 00:18:39.286 }' 00:18:39.286 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:39.286 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:39.286 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:39.286 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:39.286 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:39.286 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.286 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.286 [2024-12-10 21:45:39.908868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:39.286 [2024-12-10 21:45:39.923936] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:18:39.286 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.286 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:39.286 [2024-12-10 21:45:39.933871] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:40.224 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:40.224 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:40.224 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:40.224 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:40.224 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:40.224 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.224 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.224 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.224 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.224 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.224 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:40.224 "name": "raid_bdev1", 00:18:40.224 "uuid": "5e7d55c6-2922-4bdc-a690-ed091b9c7943", 00:18:40.224 "strip_size_kb": 64, 00:18:40.224 "state": "online", 00:18:40.224 "raid_level": "raid5f", 00:18:40.224 "superblock": true, 00:18:40.224 "num_base_bdevs": 4, 00:18:40.224 "num_base_bdevs_discovered": 4, 00:18:40.224 "num_base_bdevs_operational": 4, 00:18:40.224 "process": { 00:18:40.224 "type": "rebuild", 00:18:40.224 "target": "spare", 00:18:40.224 "progress": { 00:18:40.224 "blocks": 19200, 00:18:40.224 "percent": 10 00:18:40.224 } 00:18:40.224 }, 00:18:40.224 "base_bdevs_list": [ 00:18:40.224 { 00:18:40.224 "name": "spare", 00:18:40.224 "uuid": "8ce8d9aa-88da-57dd-936e-4d122368f191", 00:18:40.224 "is_configured": true, 00:18:40.224 "data_offset": 2048, 00:18:40.224 "data_size": 63488 00:18:40.224 }, 00:18:40.224 { 00:18:40.224 "name": "BaseBdev2", 00:18:40.224 "uuid": "193666aa-bfd0-5ea4-a3c4-d3dfff993717", 00:18:40.224 "is_configured": true, 00:18:40.224 "data_offset": 2048, 00:18:40.224 "data_size": 63488 00:18:40.224 }, 00:18:40.224 { 
00:18:40.224 "name": "BaseBdev3", 00:18:40.224 "uuid": "880a0627-ad2c-59f0-8bab-2ead3c6acb00", 00:18:40.224 "is_configured": true, 00:18:40.224 "data_offset": 2048, 00:18:40.224 "data_size": 63488 00:18:40.224 }, 00:18:40.224 { 00:18:40.224 "name": "BaseBdev4", 00:18:40.224 "uuid": "91d71167-4a07-5447-a21e-05017b731016", 00:18:40.224 "is_configured": true, 00:18:40.224 "data_offset": 2048, 00:18:40.224 "data_size": 63488 00:18:40.224 } 00:18:40.224 ] 00:18:40.224 }' 00:18:40.224 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:40.484 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:40.484 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:40.484 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:40.484 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:40.484 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:40.484 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:40.484 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:40.484 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:40.484 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=653 00:18:40.484 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:40.484 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:40.484 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:40.484 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:40.484 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:40.484 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:40.484 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.484 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.484 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.484 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.484 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.484 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:40.484 "name": "raid_bdev1", 00:18:40.484 "uuid": "5e7d55c6-2922-4bdc-a690-ed091b9c7943", 00:18:40.484 "strip_size_kb": 64, 00:18:40.484 "state": "online", 00:18:40.484 "raid_level": "raid5f", 00:18:40.484 "superblock": true, 00:18:40.484 "num_base_bdevs": 4, 00:18:40.484 "num_base_bdevs_discovered": 4, 00:18:40.484 "num_base_bdevs_operational": 4, 00:18:40.484 "process": { 00:18:40.484 "type": "rebuild", 00:18:40.484 "target": "spare", 00:18:40.484 "progress": { 00:18:40.484 "blocks": 21120, 00:18:40.484 "percent": 11 00:18:40.484 } 00:18:40.484 }, 00:18:40.484 "base_bdevs_list": [ 00:18:40.484 { 00:18:40.484 "name": "spare", 00:18:40.484 "uuid": "8ce8d9aa-88da-57dd-936e-4d122368f191", 00:18:40.484 "is_configured": true, 00:18:40.484 "data_offset": 2048, 00:18:40.484 "data_size": 63488 00:18:40.484 }, 00:18:40.484 { 00:18:40.484 "name": "BaseBdev2", 00:18:40.484 "uuid": "193666aa-bfd0-5ea4-a3c4-d3dfff993717", 00:18:40.484 "is_configured": true, 00:18:40.484 "data_offset": 2048, 00:18:40.484 "data_size": 63488 00:18:40.484 }, 00:18:40.484 { 
00:18:40.484 "name": "BaseBdev3", 00:18:40.484 "uuid": "880a0627-ad2c-59f0-8bab-2ead3c6acb00", 00:18:40.484 "is_configured": true, 00:18:40.484 "data_offset": 2048, 00:18:40.484 "data_size": 63488 00:18:40.484 }, 00:18:40.484 { 00:18:40.484 "name": "BaseBdev4", 00:18:40.484 "uuid": "91d71167-4a07-5447-a21e-05017b731016", 00:18:40.484 "is_configured": true, 00:18:40.484 "data_offset": 2048, 00:18:40.484 "data_size": 63488 00:18:40.484 } 00:18:40.484 ] 00:18:40.484 }' 00:18:40.484 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:40.484 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:40.484 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:40.484 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:40.484 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:41.423 21:45:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:41.423 21:45:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:41.423 21:45:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:41.423 21:45:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:41.423 21:45:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:41.423 21:45:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:41.423 21:45:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.423 21:45:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.423 21:45:42 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:41.423 21:45:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.423 21:45:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.683 21:45:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:41.683 "name": "raid_bdev1", 00:18:41.683 "uuid": "5e7d55c6-2922-4bdc-a690-ed091b9c7943", 00:18:41.683 "strip_size_kb": 64, 00:18:41.683 "state": "online", 00:18:41.683 "raid_level": "raid5f", 00:18:41.683 "superblock": true, 00:18:41.683 "num_base_bdevs": 4, 00:18:41.683 "num_base_bdevs_discovered": 4, 00:18:41.683 "num_base_bdevs_operational": 4, 00:18:41.683 "process": { 00:18:41.683 "type": "rebuild", 00:18:41.683 "target": "spare", 00:18:41.683 "progress": { 00:18:41.683 "blocks": 42240, 00:18:41.683 "percent": 22 00:18:41.683 } 00:18:41.683 }, 00:18:41.683 "base_bdevs_list": [ 00:18:41.683 { 00:18:41.683 "name": "spare", 00:18:41.683 "uuid": "8ce8d9aa-88da-57dd-936e-4d122368f191", 00:18:41.683 "is_configured": true, 00:18:41.683 "data_offset": 2048, 00:18:41.683 "data_size": 63488 00:18:41.683 }, 00:18:41.683 { 00:18:41.683 "name": "BaseBdev2", 00:18:41.683 "uuid": "193666aa-bfd0-5ea4-a3c4-d3dfff993717", 00:18:41.683 "is_configured": true, 00:18:41.683 "data_offset": 2048, 00:18:41.683 "data_size": 63488 00:18:41.683 }, 00:18:41.683 { 00:18:41.683 "name": "BaseBdev3", 00:18:41.683 "uuid": "880a0627-ad2c-59f0-8bab-2ead3c6acb00", 00:18:41.683 "is_configured": true, 00:18:41.683 "data_offset": 2048, 00:18:41.683 "data_size": 63488 00:18:41.683 }, 00:18:41.683 { 00:18:41.683 "name": "BaseBdev4", 00:18:41.683 "uuid": "91d71167-4a07-5447-a21e-05017b731016", 00:18:41.683 "is_configured": true, 00:18:41.683 "data_offset": 2048, 00:18:41.683 "data_size": 63488 00:18:41.683 } 00:18:41.683 ] 00:18:41.683 }' 00:18:41.683 21:45:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:18:41.683 21:45:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:41.683 21:45:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.683 21:45:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:41.683 21:45:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:42.622 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:42.622 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:42.622 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.622 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:42.622 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:42.622 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.622 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.622 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.622 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.622 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.622 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.622 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.622 "name": "raid_bdev1", 00:18:42.622 "uuid": "5e7d55c6-2922-4bdc-a690-ed091b9c7943", 00:18:42.622 "strip_size_kb": 64, 00:18:42.622 "state": "online", 00:18:42.622 
"raid_level": "raid5f", 00:18:42.622 "superblock": true, 00:18:42.622 "num_base_bdevs": 4, 00:18:42.622 "num_base_bdevs_discovered": 4, 00:18:42.622 "num_base_bdevs_operational": 4, 00:18:42.622 "process": { 00:18:42.622 "type": "rebuild", 00:18:42.622 "target": "spare", 00:18:42.622 "progress": { 00:18:42.622 "blocks": 63360, 00:18:42.622 "percent": 33 00:18:42.622 } 00:18:42.622 }, 00:18:42.622 "base_bdevs_list": [ 00:18:42.622 { 00:18:42.622 "name": "spare", 00:18:42.622 "uuid": "8ce8d9aa-88da-57dd-936e-4d122368f191", 00:18:42.622 "is_configured": true, 00:18:42.622 "data_offset": 2048, 00:18:42.622 "data_size": 63488 00:18:42.622 }, 00:18:42.622 { 00:18:42.622 "name": "BaseBdev2", 00:18:42.622 "uuid": "193666aa-bfd0-5ea4-a3c4-d3dfff993717", 00:18:42.622 "is_configured": true, 00:18:42.622 "data_offset": 2048, 00:18:42.622 "data_size": 63488 00:18:42.622 }, 00:18:42.622 { 00:18:42.622 "name": "BaseBdev3", 00:18:42.622 "uuid": "880a0627-ad2c-59f0-8bab-2ead3c6acb00", 00:18:42.622 "is_configured": true, 00:18:42.622 "data_offset": 2048, 00:18:42.622 "data_size": 63488 00:18:42.622 }, 00:18:42.622 { 00:18:42.622 "name": "BaseBdev4", 00:18:42.622 "uuid": "91d71167-4a07-5447-a21e-05017b731016", 00:18:42.622 "is_configured": true, 00:18:42.622 "data_offset": 2048, 00:18:42.622 "data_size": 63488 00:18:42.622 } 00:18:42.622 ] 00:18:42.622 }' 00:18:42.622 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:42.882 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:42.882 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:42.882 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:42.882 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:43.820 21:45:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- 
# (( SECONDS < timeout )) 00:18:43.820 21:45:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:43.820 21:45:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.820 21:45:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:43.820 21:45:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:43.820 21:45:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.820 21:45:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.820 21:45:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.820 21:45:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.820 21:45:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.820 21:45:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.820 21:45:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.820 "name": "raid_bdev1", 00:18:43.820 "uuid": "5e7d55c6-2922-4bdc-a690-ed091b9c7943", 00:18:43.820 "strip_size_kb": 64, 00:18:43.820 "state": "online", 00:18:43.820 "raid_level": "raid5f", 00:18:43.820 "superblock": true, 00:18:43.820 "num_base_bdevs": 4, 00:18:43.820 "num_base_bdevs_discovered": 4, 00:18:43.820 "num_base_bdevs_operational": 4, 00:18:43.820 "process": { 00:18:43.820 "type": "rebuild", 00:18:43.820 "target": "spare", 00:18:43.820 "progress": { 00:18:43.820 "blocks": 86400, 00:18:43.820 "percent": 45 00:18:43.820 } 00:18:43.820 }, 00:18:43.820 "base_bdevs_list": [ 00:18:43.820 { 00:18:43.820 "name": "spare", 00:18:43.820 "uuid": "8ce8d9aa-88da-57dd-936e-4d122368f191", 00:18:43.820 "is_configured": true, 
00:18:43.820 "data_offset": 2048, 00:18:43.820 "data_size": 63488 00:18:43.820 }, 00:18:43.820 { 00:18:43.820 "name": "BaseBdev2", 00:18:43.820 "uuid": "193666aa-bfd0-5ea4-a3c4-d3dfff993717", 00:18:43.820 "is_configured": true, 00:18:43.820 "data_offset": 2048, 00:18:43.820 "data_size": 63488 00:18:43.820 }, 00:18:43.820 { 00:18:43.820 "name": "BaseBdev3", 00:18:43.820 "uuid": "880a0627-ad2c-59f0-8bab-2ead3c6acb00", 00:18:43.820 "is_configured": true, 00:18:43.820 "data_offset": 2048, 00:18:43.820 "data_size": 63488 00:18:43.820 }, 00:18:43.820 { 00:18:43.820 "name": "BaseBdev4", 00:18:43.820 "uuid": "91d71167-4a07-5447-a21e-05017b731016", 00:18:43.820 "is_configured": true, 00:18:43.820 "data_offset": 2048, 00:18:43.820 "data_size": 63488 00:18:43.820 } 00:18:43.820 ] 00:18:43.820 }' 00:18:43.820 21:45:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.820 21:45:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:43.820 21:45:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:44.080 21:45:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:44.080 21:45:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:45.027 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:45.027 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:45.027 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:45.027 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:45.027 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:45.027 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:45.027 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.027 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.027 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.027 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.027 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.027 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:45.027 "name": "raid_bdev1", 00:18:45.027 "uuid": "5e7d55c6-2922-4bdc-a690-ed091b9c7943", 00:18:45.027 "strip_size_kb": 64, 00:18:45.027 "state": "online", 00:18:45.027 "raid_level": "raid5f", 00:18:45.027 "superblock": true, 00:18:45.027 "num_base_bdevs": 4, 00:18:45.027 "num_base_bdevs_discovered": 4, 00:18:45.027 "num_base_bdevs_operational": 4, 00:18:45.027 "process": { 00:18:45.027 "type": "rebuild", 00:18:45.027 "target": "spare", 00:18:45.027 "progress": { 00:18:45.027 "blocks": 107520, 00:18:45.027 "percent": 56 00:18:45.027 } 00:18:45.027 }, 00:18:45.027 "base_bdevs_list": [ 00:18:45.027 { 00:18:45.027 "name": "spare", 00:18:45.027 "uuid": "8ce8d9aa-88da-57dd-936e-4d122368f191", 00:18:45.027 "is_configured": true, 00:18:45.027 "data_offset": 2048, 00:18:45.027 "data_size": 63488 00:18:45.027 }, 00:18:45.027 { 00:18:45.027 "name": "BaseBdev2", 00:18:45.027 "uuid": "193666aa-bfd0-5ea4-a3c4-d3dfff993717", 00:18:45.027 "is_configured": true, 00:18:45.027 "data_offset": 2048, 00:18:45.027 "data_size": 63488 00:18:45.027 }, 00:18:45.027 { 00:18:45.027 "name": "BaseBdev3", 00:18:45.027 "uuid": "880a0627-ad2c-59f0-8bab-2ead3c6acb00", 00:18:45.027 "is_configured": true, 00:18:45.027 "data_offset": 2048, 00:18:45.027 "data_size": 63488 00:18:45.027 }, 00:18:45.027 
{ 00:18:45.027 "name": "BaseBdev4", 00:18:45.027 "uuid": "91d71167-4a07-5447-a21e-05017b731016", 00:18:45.027 "is_configured": true, 00:18:45.027 "data_offset": 2048, 00:18:45.027 "data_size": 63488 00:18:45.027 } 00:18:45.027 ] 00:18:45.027 }' 00:18:45.027 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:45.027 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:45.027 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:45.027 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:45.027 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:46.409 21:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:46.409 21:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:46.409 21:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:46.409 21:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:46.409 21:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:46.409 21:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:46.409 21:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.409 21:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.409 21:45:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.409 21:45:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.409 21:45:46 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.409 21:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:46.409 "name": "raid_bdev1", 00:18:46.409 "uuid": "5e7d55c6-2922-4bdc-a690-ed091b9c7943", 00:18:46.409 "strip_size_kb": 64, 00:18:46.409 "state": "online", 00:18:46.409 "raid_level": "raid5f", 00:18:46.409 "superblock": true, 00:18:46.409 "num_base_bdevs": 4, 00:18:46.409 "num_base_bdevs_discovered": 4, 00:18:46.409 "num_base_bdevs_operational": 4, 00:18:46.409 "process": { 00:18:46.409 "type": "rebuild", 00:18:46.409 "target": "spare", 00:18:46.409 "progress": { 00:18:46.409 "blocks": 130560, 00:18:46.409 "percent": 68 00:18:46.409 } 00:18:46.409 }, 00:18:46.409 "base_bdevs_list": [ 00:18:46.409 { 00:18:46.409 "name": "spare", 00:18:46.409 "uuid": "8ce8d9aa-88da-57dd-936e-4d122368f191", 00:18:46.409 "is_configured": true, 00:18:46.409 "data_offset": 2048, 00:18:46.409 "data_size": 63488 00:18:46.409 }, 00:18:46.409 { 00:18:46.409 "name": "BaseBdev2", 00:18:46.409 "uuid": "193666aa-bfd0-5ea4-a3c4-d3dfff993717", 00:18:46.409 "is_configured": true, 00:18:46.409 "data_offset": 2048, 00:18:46.409 "data_size": 63488 00:18:46.409 }, 00:18:46.409 { 00:18:46.409 "name": "BaseBdev3", 00:18:46.409 "uuid": "880a0627-ad2c-59f0-8bab-2ead3c6acb00", 00:18:46.409 "is_configured": true, 00:18:46.409 "data_offset": 2048, 00:18:46.409 "data_size": 63488 00:18:46.409 }, 00:18:46.409 { 00:18:46.409 "name": "BaseBdev4", 00:18:46.409 "uuid": "91d71167-4a07-5447-a21e-05017b731016", 00:18:46.409 "is_configured": true, 00:18:46.409 "data_offset": 2048, 00:18:46.409 "data_size": 63488 00:18:46.409 } 00:18:46.409 ] 00:18:46.409 }' 00:18:46.409 21:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:46.409 21:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:46.409 21:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:18:46.409 21:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:46.409 21:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:47.349 21:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:47.349 21:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:47.349 21:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:47.349 21:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:47.349 21:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:47.349 21:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:47.349 21:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.349 21:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.349 21:45:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.349 21:45:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.349 21:45:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.349 21:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:47.349 "name": "raid_bdev1", 00:18:47.349 "uuid": "5e7d55c6-2922-4bdc-a690-ed091b9c7943", 00:18:47.349 "strip_size_kb": 64, 00:18:47.349 "state": "online", 00:18:47.349 "raid_level": "raid5f", 00:18:47.349 "superblock": true, 00:18:47.349 "num_base_bdevs": 4, 00:18:47.349 "num_base_bdevs_discovered": 4, 00:18:47.349 "num_base_bdevs_operational": 4, 00:18:47.349 "process": { 00:18:47.349 "type": 
"rebuild", 00:18:47.349 "target": "spare", 00:18:47.349 "progress": { 00:18:47.349 "blocks": 151680, 00:18:47.349 "percent": 79 00:18:47.350 } 00:18:47.350 }, 00:18:47.350 "base_bdevs_list": [ 00:18:47.350 { 00:18:47.350 "name": "spare", 00:18:47.350 "uuid": "8ce8d9aa-88da-57dd-936e-4d122368f191", 00:18:47.350 "is_configured": true, 00:18:47.350 "data_offset": 2048, 00:18:47.350 "data_size": 63488 00:18:47.350 }, 00:18:47.350 { 00:18:47.350 "name": "BaseBdev2", 00:18:47.350 "uuid": "193666aa-bfd0-5ea4-a3c4-d3dfff993717", 00:18:47.350 "is_configured": true, 00:18:47.350 "data_offset": 2048, 00:18:47.350 "data_size": 63488 00:18:47.350 }, 00:18:47.350 { 00:18:47.350 "name": "BaseBdev3", 00:18:47.350 "uuid": "880a0627-ad2c-59f0-8bab-2ead3c6acb00", 00:18:47.350 "is_configured": true, 00:18:47.350 "data_offset": 2048, 00:18:47.350 "data_size": 63488 00:18:47.350 }, 00:18:47.350 { 00:18:47.350 "name": "BaseBdev4", 00:18:47.350 "uuid": "91d71167-4a07-5447-a21e-05017b731016", 00:18:47.350 "is_configured": true, 00:18:47.350 "data_offset": 2048, 00:18:47.350 "data_size": 63488 00:18:47.350 } 00:18:47.350 ] 00:18:47.350 }' 00:18:47.350 21:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:47.350 21:45:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:47.350 21:45:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:47.350 21:45:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:47.350 21:45:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:48.728 21:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:48.728 21:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:48.728 21:45:49 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:48.728 21:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:48.728 21:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:48.728 21:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:48.728 21:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.728 21:45:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.728 21:45:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.728 21:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.728 21:45:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.728 21:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:48.728 "name": "raid_bdev1", 00:18:48.728 "uuid": "5e7d55c6-2922-4bdc-a690-ed091b9c7943", 00:18:48.728 "strip_size_kb": 64, 00:18:48.728 "state": "online", 00:18:48.728 "raid_level": "raid5f", 00:18:48.728 "superblock": true, 00:18:48.728 "num_base_bdevs": 4, 00:18:48.728 "num_base_bdevs_discovered": 4, 00:18:48.728 "num_base_bdevs_operational": 4, 00:18:48.728 "process": { 00:18:48.728 "type": "rebuild", 00:18:48.728 "target": "spare", 00:18:48.728 "progress": { 00:18:48.728 "blocks": 174720, 00:18:48.728 "percent": 91 00:18:48.728 } 00:18:48.728 }, 00:18:48.728 "base_bdevs_list": [ 00:18:48.728 { 00:18:48.728 "name": "spare", 00:18:48.728 "uuid": "8ce8d9aa-88da-57dd-936e-4d122368f191", 00:18:48.728 "is_configured": true, 00:18:48.728 "data_offset": 2048, 00:18:48.728 "data_size": 63488 00:18:48.728 }, 00:18:48.728 { 00:18:48.728 "name": "BaseBdev2", 00:18:48.728 "uuid": "193666aa-bfd0-5ea4-a3c4-d3dfff993717", 00:18:48.728 
"is_configured": true, 00:18:48.728 "data_offset": 2048, 00:18:48.728 "data_size": 63488 00:18:48.728 }, 00:18:48.728 { 00:18:48.728 "name": "BaseBdev3", 00:18:48.728 "uuid": "880a0627-ad2c-59f0-8bab-2ead3c6acb00", 00:18:48.728 "is_configured": true, 00:18:48.728 "data_offset": 2048, 00:18:48.728 "data_size": 63488 00:18:48.728 }, 00:18:48.728 { 00:18:48.728 "name": "BaseBdev4", 00:18:48.728 "uuid": "91d71167-4a07-5447-a21e-05017b731016", 00:18:48.728 "is_configured": true, 00:18:48.728 "data_offset": 2048, 00:18:48.728 "data_size": 63488 00:18:48.728 } 00:18:48.728 ] 00:18:48.728 }' 00:18:48.728 21:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:48.728 21:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:48.728 21:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:48.728 21:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:48.728 21:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:49.299 [2024-12-10 21:45:49.988420] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:49.299 [2024-12-10 21:45:49.988498] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:49.299 [2024-12-10 21:45:49.988633] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:49.561 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:49.561 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:49.561 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:49.561 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:18:49.561 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:49.561 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:49.561 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.561 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.561 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.561 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.561 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.562 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:49.562 "name": "raid_bdev1", 00:18:49.562 "uuid": "5e7d55c6-2922-4bdc-a690-ed091b9c7943", 00:18:49.562 "strip_size_kb": 64, 00:18:49.562 "state": "online", 00:18:49.562 "raid_level": "raid5f", 00:18:49.562 "superblock": true, 00:18:49.562 "num_base_bdevs": 4, 00:18:49.562 "num_base_bdevs_discovered": 4, 00:18:49.562 "num_base_bdevs_operational": 4, 00:18:49.562 "base_bdevs_list": [ 00:18:49.562 { 00:18:49.562 "name": "spare", 00:18:49.562 "uuid": "8ce8d9aa-88da-57dd-936e-4d122368f191", 00:18:49.562 "is_configured": true, 00:18:49.562 "data_offset": 2048, 00:18:49.562 "data_size": 63488 00:18:49.562 }, 00:18:49.562 { 00:18:49.562 "name": "BaseBdev2", 00:18:49.562 "uuid": "193666aa-bfd0-5ea4-a3c4-d3dfff993717", 00:18:49.562 "is_configured": true, 00:18:49.562 "data_offset": 2048, 00:18:49.562 "data_size": 63488 00:18:49.562 }, 00:18:49.562 { 00:18:49.562 "name": "BaseBdev3", 00:18:49.562 "uuid": "880a0627-ad2c-59f0-8bab-2ead3c6acb00", 00:18:49.562 "is_configured": true, 00:18:49.562 "data_offset": 2048, 00:18:49.562 "data_size": 63488 00:18:49.562 }, 00:18:49.562 { 00:18:49.562 "name": 
"BaseBdev4", 00:18:49.562 "uuid": "91d71167-4a07-5447-a21e-05017b731016", 00:18:49.562 "is_configured": true, 00:18:49.562 "data_offset": 2048, 00:18:49.562 "data_size": 63488 00:18:49.562 } 00:18:49.562 ] 00:18:49.562 }' 00:18:49.562 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:49.562 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:49.562 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:49.562 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:49.562 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:18:49.562 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:49.562 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:49.562 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:49.562 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:49.562 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:49.562 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.562 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.562 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.562 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.825 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.825 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:49.825 "name": "raid_bdev1", 00:18:49.825 "uuid": "5e7d55c6-2922-4bdc-a690-ed091b9c7943", 00:18:49.825 "strip_size_kb": 64, 00:18:49.825 "state": "online", 00:18:49.825 "raid_level": "raid5f", 00:18:49.825 "superblock": true, 00:18:49.825 "num_base_bdevs": 4, 00:18:49.825 "num_base_bdevs_discovered": 4, 00:18:49.825 "num_base_bdevs_operational": 4, 00:18:49.825 "base_bdevs_list": [ 00:18:49.825 { 00:18:49.825 "name": "spare", 00:18:49.825 "uuid": "8ce8d9aa-88da-57dd-936e-4d122368f191", 00:18:49.825 "is_configured": true, 00:18:49.825 "data_offset": 2048, 00:18:49.825 "data_size": 63488 00:18:49.825 }, 00:18:49.825 { 00:18:49.825 "name": "BaseBdev2", 00:18:49.825 "uuid": "193666aa-bfd0-5ea4-a3c4-d3dfff993717", 00:18:49.825 "is_configured": true, 00:18:49.825 "data_offset": 2048, 00:18:49.825 "data_size": 63488 00:18:49.825 }, 00:18:49.825 { 00:18:49.825 "name": "BaseBdev3", 00:18:49.825 "uuid": "880a0627-ad2c-59f0-8bab-2ead3c6acb00", 00:18:49.825 "is_configured": true, 00:18:49.825 "data_offset": 2048, 00:18:49.825 "data_size": 63488 00:18:49.825 }, 00:18:49.825 { 00:18:49.825 "name": "BaseBdev4", 00:18:49.825 "uuid": "91d71167-4a07-5447-a21e-05017b731016", 00:18:49.825 "is_configured": true, 00:18:49.825 "data_offset": 2048, 00:18:49.825 "data_size": 63488 00:18:49.825 } 00:18:49.825 ] 00:18:49.825 }' 00:18:49.825 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:49.825 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:49.825 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:49.825 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:49.825 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:49.825 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:49.825 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:49.825 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:49.825 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:49.825 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:49.825 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.825 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.825 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.825 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.825 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.825 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.825 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.825 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.825 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.825 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.825 "name": "raid_bdev1", 00:18:49.825 "uuid": "5e7d55c6-2922-4bdc-a690-ed091b9c7943", 00:18:49.825 "strip_size_kb": 64, 00:18:49.825 "state": "online", 00:18:49.825 "raid_level": "raid5f", 00:18:49.825 "superblock": true, 00:18:49.825 "num_base_bdevs": 4, 00:18:49.825 "num_base_bdevs_discovered": 4, 00:18:49.825 "num_base_bdevs_operational": 4, 00:18:49.825 "base_bdevs_list": [ 00:18:49.825 { 
00:18:49.825 "name": "spare", 00:18:49.825 "uuid": "8ce8d9aa-88da-57dd-936e-4d122368f191", 00:18:49.825 "is_configured": true, 00:18:49.825 "data_offset": 2048, 00:18:49.825 "data_size": 63488 00:18:49.825 }, 00:18:49.825 { 00:18:49.825 "name": "BaseBdev2", 00:18:49.825 "uuid": "193666aa-bfd0-5ea4-a3c4-d3dfff993717", 00:18:49.825 "is_configured": true, 00:18:49.825 "data_offset": 2048, 00:18:49.825 "data_size": 63488 00:18:49.825 }, 00:18:49.825 { 00:18:49.825 "name": "BaseBdev3", 00:18:49.825 "uuid": "880a0627-ad2c-59f0-8bab-2ead3c6acb00", 00:18:49.825 "is_configured": true, 00:18:49.825 "data_offset": 2048, 00:18:49.825 "data_size": 63488 00:18:49.825 }, 00:18:49.825 { 00:18:49.825 "name": "BaseBdev4", 00:18:49.825 "uuid": "91d71167-4a07-5447-a21e-05017b731016", 00:18:49.825 "is_configured": true, 00:18:49.825 "data_offset": 2048, 00:18:49.825 "data_size": 63488 00:18:49.825 } 00:18:49.825 ] 00:18:49.825 }' 00:18:49.825 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.825 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.395 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:50.395 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.395 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.395 [2024-12-10 21:45:50.884577] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:50.395 [2024-12-10 21:45:50.884611] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:50.395 [2024-12-10 21:45:50.884697] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:50.395 [2024-12-10 21:45:50.884800] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:50.395 [2024-12-10 
21:45:50.884824] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:50.395 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.395 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.395 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.396 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:18:50.396 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.396 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.396 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:50.396 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:50.396 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:50.396 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:50.396 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:50.396 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:50.396 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:50.396 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:50.396 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:50.396 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:50.396 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:50.396 21:45:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:50.396 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:50.396 /dev/nbd0 00:18:50.396 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:50.396 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:50.396 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:50.396 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:50.396 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:50.396 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:50.396 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:50.396 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:50.396 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:50.396 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:50.396 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:50.396 1+0 records in 00:18:50.396 1+0 records out 00:18:50.396 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030356 s, 13.5 MB/s 00:18:50.656 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:50.656 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:50.656 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:50.656 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:50.656 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:50.656 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:50.656 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:50.656 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:50.656 /dev/nbd1 00:18:50.656 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:50.656 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:50.656 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:50.656 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:50.656 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:50.656 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:50.656 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:50.656 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:50.656 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:50.656 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:50.656 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:50.656 1+0 records in 00:18:50.656 
1+0 records out 00:18:50.656 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219028 s, 18.7 MB/s 00:18:50.656 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:50.656 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:50.656 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:50.656 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:50.656 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:50.656 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:50.656 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:50.656 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:50.914 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:50.914 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:50.914 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:50.914 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:50.914 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:50.914 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:50.914 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:51.173 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:51.173 
21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:51.173 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:51.173 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:51.173 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:51.173 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:51.173 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:51.173 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:51.173 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:51.173 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:51.432 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:51.432 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:51.432 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:51.432 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:51.432 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:51.432 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:51.432 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:51.432 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:51.432 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:51.432 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd 
bdev_passthru_delete spare 00:18:51.432 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.432 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.432 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.432 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:51.432 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.432 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.432 [2024-12-10 21:45:52.031068] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:51.432 [2024-12-10 21:45:52.031127] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:51.432 [2024-12-10 21:45:52.031150] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:51.432 [2024-12-10 21:45:52.031160] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:51.432 [2024-12-10 21:45:52.033381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:51.432 [2024-12-10 21:45:52.033472] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:51.432 [2024-12-10 21:45:52.033593] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:51.432 [2024-12-10 21:45:52.033645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:51.432 [2024-12-10 21:45:52.033780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:51.432 [2024-12-10 21:45:52.033882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:51.432 [2024-12-10 21:45:52.033965] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:51.432 spare 00:18:51.432 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.432 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:51.432 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.432 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.432 [2024-12-10 21:45:52.133862] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:51.432 [2024-12-10 21:45:52.133891] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:51.432 [2024-12-10 21:45:52.134139] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:18:51.432 [2024-12-10 21:45:52.141340] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:51.432 [2024-12-10 21:45:52.141359] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:51.432 [2024-12-10 21:45:52.141534] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:51.432 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.432 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:51.432 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:51.432 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:51.432 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:51.432 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:18:51.432 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:51.432 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.432 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.432 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:51.432 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:51.432 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.432 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.432 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.432 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.432 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.432 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.432 "name": "raid_bdev1", 00:18:51.432 "uuid": "5e7d55c6-2922-4bdc-a690-ed091b9c7943", 00:18:51.432 "strip_size_kb": 64, 00:18:51.432 "state": "online", 00:18:51.432 "raid_level": "raid5f", 00:18:51.432 "superblock": true, 00:18:51.432 "num_base_bdevs": 4, 00:18:51.432 "num_base_bdevs_discovered": 4, 00:18:51.432 "num_base_bdevs_operational": 4, 00:18:51.432 "base_bdevs_list": [ 00:18:51.432 { 00:18:51.432 "name": "spare", 00:18:51.432 "uuid": "8ce8d9aa-88da-57dd-936e-4d122368f191", 00:18:51.432 "is_configured": true, 00:18:51.432 "data_offset": 2048, 00:18:51.432 "data_size": 63488 00:18:51.432 }, 00:18:51.432 { 00:18:51.432 "name": "BaseBdev2", 00:18:51.432 "uuid": "193666aa-bfd0-5ea4-a3c4-d3dfff993717", 00:18:51.432 "is_configured": true, 00:18:51.432 "data_offset": 
2048, 00:18:51.432 "data_size": 63488 00:18:51.432 }, 00:18:51.432 { 00:18:51.432 "name": "BaseBdev3", 00:18:51.432 "uuid": "880a0627-ad2c-59f0-8bab-2ead3c6acb00", 00:18:51.432 "is_configured": true, 00:18:51.432 "data_offset": 2048, 00:18:51.432 "data_size": 63488 00:18:51.432 }, 00:18:51.432 { 00:18:51.432 "name": "BaseBdev4", 00:18:51.432 "uuid": "91d71167-4a07-5447-a21e-05017b731016", 00:18:51.432 "is_configured": true, 00:18:51.432 "data_offset": 2048, 00:18:51.432 "data_size": 63488 00:18:51.432 } 00:18:51.432 ] 00:18:51.432 }' 00:18:51.432 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.432 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.028 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:52.028 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:52.028 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:52.028 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:52.028 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.028 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.028 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.028 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.028 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.028 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.028 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:52.028 "name": 
"raid_bdev1", 00:18:52.028 "uuid": "5e7d55c6-2922-4bdc-a690-ed091b9c7943", 00:18:52.028 "strip_size_kb": 64, 00:18:52.028 "state": "online", 00:18:52.028 "raid_level": "raid5f", 00:18:52.028 "superblock": true, 00:18:52.028 "num_base_bdevs": 4, 00:18:52.028 "num_base_bdevs_discovered": 4, 00:18:52.028 "num_base_bdevs_operational": 4, 00:18:52.028 "base_bdevs_list": [ 00:18:52.028 { 00:18:52.028 "name": "spare", 00:18:52.028 "uuid": "8ce8d9aa-88da-57dd-936e-4d122368f191", 00:18:52.028 "is_configured": true, 00:18:52.029 "data_offset": 2048, 00:18:52.029 "data_size": 63488 00:18:52.029 }, 00:18:52.029 { 00:18:52.029 "name": "BaseBdev2", 00:18:52.029 "uuid": "193666aa-bfd0-5ea4-a3c4-d3dfff993717", 00:18:52.029 "is_configured": true, 00:18:52.029 "data_offset": 2048, 00:18:52.029 "data_size": 63488 00:18:52.029 }, 00:18:52.029 { 00:18:52.029 "name": "BaseBdev3", 00:18:52.029 "uuid": "880a0627-ad2c-59f0-8bab-2ead3c6acb00", 00:18:52.029 "is_configured": true, 00:18:52.029 "data_offset": 2048, 00:18:52.029 "data_size": 63488 00:18:52.029 }, 00:18:52.029 { 00:18:52.029 "name": "BaseBdev4", 00:18:52.029 "uuid": "91d71167-4a07-5447-a21e-05017b731016", 00:18:52.029 "is_configured": true, 00:18:52.029 "data_offset": 2048, 00:18:52.029 "data_size": 63488 00:18:52.029 } 00:18:52.029 ] 00:18:52.029 }' 00:18:52.029 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.029 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:52.029 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.029 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:52.029 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.029 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.029 
21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.029 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:52.029 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.029 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:52.029 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:52.029 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.029 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.029 [2024-12-10 21:45:52.764709] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:52.029 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.029 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:52.029 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:52.029 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:52.029 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:52.029 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:52.029 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:52.029 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:52.029 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:52.029 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:52.029 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:52.029 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.029 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.029 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.029 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.029 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.288 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.288 "name": "raid_bdev1", 00:18:52.288 "uuid": "5e7d55c6-2922-4bdc-a690-ed091b9c7943", 00:18:52.288 "strip_size_kb": 64, 00:18:52.288 "state": "online", 00:18:52.288 "raid_level": "raid5f", 00:18:52.288 "superblock": true, 00:18:52.288 "num_base_bdevs": 4, 00:18:52.288 "num_base_bdevs_discovered": 3, 00:18:52.288 "num_base_bdevs_operational": 3, 00:18:52.288 "base_bdevs_list": [ 00:18:52.288 { 00:18:52.288 "name": null, 00:18:52.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.288 "is_configured": false, 00:18:52.288 "data_offset": 0, 00:18:52.288 "data_size": 63488 00:18:52.288 }, 00:18:52.288 { 00:18:52.288 "name": "BaseBdev2", 00:18:52.288 "uuid": "193666aa-bfd0-5ea4-a3c4-d3dfff993717", 00:18:52.288 "is_configured": true, 00:18:52.288 "data_offset": 2048, 00:18:52.288 "data_size": 63488 00:18:52.288 }, 00:18:52.288 { 00:18:52.288 "name": "BaseBdev3", 00:18:52.288 "uuid": "880a0627-ad2c-59f0-8bab-2ead3c6acb00", 00:18:52.288 "is_configured": true, 00:18:52.288 "data_offset": 2048, 00:18:52.288 "data_size": 63488 00:18:52.288 }, 00:18:52.288 { 00:18:52.288 "name": "BaseBdev4", 00:18:52.288 "uuid": "91d71167-4a07-5447-a21e-05017b731016", 00:18:52.288 "is_configured": true, 00:18:52.288 "data_offset": 
2048, 00:18:52.288 "data_size": 63488 00:18:52.288 } 00:18:52.288 ] 00:18:52.288 }' 00:18:52.288 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.289 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.548 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:52.548 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.548 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.548 [2024-12-10 21:45:53.152241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:52.548 [2024-12-10 21:45:53.152510] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:52.548 [2024-12-10 21:45:53.152587] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:52.548 [2024-12-10 21:45:53.152652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:52.548 [2024-12-10 21:45:53.167170] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:18:52.548 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.548 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:52.548 [2024-12-10 21:45:53.175913] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:53.487 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:53.487 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:53.487 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:53.487 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:53.487 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:53.487 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.487 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.487 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.487 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.487 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.487 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:53.487 "name": "raid_bdev1", 00:18:53.487 "uuid": "5e7d55c6-2922-4bdc-a690-ed091b9c7943", 00:18:53.487 "strip_size_kb": 64, 00:18:53.487 "state": "online", 00:18:53.487 
"raid_level": "raid5f", 00:18:53.487 "superblock": true, 00:18:53.487 "num_base_bdevs": 4, 00:18:53.487 "num_base_bdevs_discovered": 4, 00:18:53.487 "num_base_bdevs_operational": 4, 00:18:53.487 "process": { 00:18:53.487 "type": "rebuild", 00:18:53.487 "target": "spare", 00:18:53.487 "progress": { 00:18:53.487 "blocks": 19200, 00:18:53.487 "percent": 10 00:18:53.487 } 00:18:53.487 }, 00:18:53.487 "base_bdevs_list": [ 00:18:53.487 { 00:18:53.487 "name": "spare", 00:18:53.487 "uuid": "8ce8d9aa-88da-57dd-936e-4d122368f191", 00:18:53.487 "is_configured": true, 00:18:53.487 "data_offset": 2048, 00:18:53.487 "data_size": 63488 00:18:53.487 }, 00:18:53.487 { 00:18:53.487 "name": "BaseBdev2", 00:18:53.487 "uuid": "193666aa-bfd0-5ea4-a3c4-d3dfff993717", 00:18:53.487 "is_configured": true, 00:18:53.487 "data_offset": 2048, 00:18:53.487 "data_size": 63488 00:18:53.487 }, 00:18:53.487 { 00:18:53.487 "name": "BaseBdev3", 00:18:53.487 "uuid": "880a0627-ad2c-59f0-8bab-2ead3c6acb00", 00:18:53.487 "is_configured": true, 00:18:53.487 "data_offset": 2048, 00:18:53.487 "data_size": 63488 00:18:53.488 }, 00:18:53.488 { 00:18:53.488 "name": "BaseBdev4", 00:18:53.488 "uuid": "91d71167-4a07-5447-a21e-05017b731016", 00:18:53.488 "is_configured": true, 00:18:53.488 "data_offset": 2048, 00:18:53.488 "data_size": 63488 00:18:53.488 } 00:18:53.488 ] 00:18:53.488 }' 00:18:53.488 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:53.747 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:53.747 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:53.747 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:53.747 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:53.747 21:45:54 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.747 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.747 [2024-12-10 21:45:54.315092] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:53.747 [2024-12-10 21:45:54.382516] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:53.747 [2024-12-10 21:45:54.382647] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:53.747 [2024-12-10 21:45:54.382686] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:53.747 [2024-12-10 21:45:54.382711] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:53.747 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.747 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:53.747 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:53.747 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:53.747 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:53.747 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:53.747 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:53.747 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:53.747 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:53.747 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:53.747 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:18:53.747 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.747 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.748 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.748 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.748 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.748 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:53.748 "name": "raid_bdev1", 00:18:53.748 "uuid": "5e7d55c6-2922-4bdc-a690-ed091b9c7943", 00:18:53.748 "strip_size_kb": 64, 00:18:53.748 "state": "online", 00:18:53.748 "raid_level": "raid5f", 00:18:53.748 "superblock": true, 00:18:53.748 "num_base_bdevs": 4, 00:18:53.748 "num_base_bdevs_discovered": 3, 00:18:53.748 "num_base_bdevs_operational": 3, 00:18:53.748 "base_bdevs_list": [ 00:18:53.748 { 00:18:53.748 "name": null, 00:18:53.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.748 "is_configured": false, 00:18:53.748 "data_offset": 0, 00:18:53.748 "data_size": 63488 00:18:53.748 }, 00:18:53.748 { 00:18:53.748 "name": "BaseBdev2", 00:18:53.748 "uuid": "193666aa-bfd0-5ea4-a3c4-d3dfff993717", 00:18:53.748 "is_configured": true, 00:18:53.748 "data_offset": 2048, 00:18:53.748 "data_size": 63488 00:18:53.748 }, 00:18:53.748 { 00:18:53.748 "name": "BaseBdev3", 00:18:53.748 "uuid": "880a0627-ad2c-59f0-8bab-2ead3c6acb00", 00:18:53.748 "is_configured": true, 00:18:53.748 "data_offset": 2048, 00:18:53.748 "data_size": 63488 00:18:53.748 }, 00:18:53.748 { 00:18:53.748 "name": "BaseBdev4", 00:18:53.748 "uuid": "91d71167-4a07-5447-a21e-05017b731016", 00:18:53.748 "is_configured": true, 00:18:53.748 "data_offset": 2048, 00:18:53.748 "data_size": 63488 00:18:53.748 } 00:18:53.748 ] 00:18:53.748 
}' 00:18:53.748 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:53.748 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.338 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:54.338 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.338 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.338 [2024-12-10 21:45:54.868119] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:54.338 [2024-12-10 21:45:54.868189] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:54.338 [2024-12-10 21:45:54.868215] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:18:54.338 [2024-12-10 21:45:54.868227] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:54.338 [2024-12-10 21:45:54.868762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:54.338 [2024-12-10 21:45:54.868784] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:54.338 [2024-12-10 21:45:54.868882] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:54.338 [2024-12-10 21:45:54.868897] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:54.338 [2024-12-10 21:45:54.868907] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:54.338 [2024-12-10 21:45:54.868931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:54.338 [2024-12-10 21:45:54.884362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:18:54.338 spare 00:18:54.338 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.338 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:54.338 [2024-12-10 21:45:54.894127] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:55.346 21:45:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:55.346 21:45:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:55.346 21:45:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:55.346 21:45:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:55.346 21:45:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:55.346 21:45:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.346 21:45:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.346 21:45:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.346 21:45:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.346 21:45:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.346 21:45:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:55.346 "name": "raid_bdev1", 00:18:55.346 "uuid": "5e7d55c6-2922-4bdc-a690-ed091b9c7943", 00:18:55.346 "strip_size_kb": 64, 00:18:55.346 "state": 
"online", 00:18:55.346 "raid_level": "raid5f", 00:18:55.346 "superblock": true, 00:18:55.346 "num_base_bdevs": 4, 00:18:55.346 "num_base_bdevs_discovered": 4, 00:18:55.346 "num_base_bdevs_operational": 4, 00:18:55.346 "process": { 00:18:55.346 "type": "rebuild", 00:18:55.346 "target": "spare", 00:18:55.346 "progress": { 00:18:55.346 "blocks": 19200, 00:18:55.346 "percent": 10 00:18:55.346 } 00:18:55.346 }, 00:18:55.346 "base_bdevs_list": [ 00:18:55.346 { 00:18:55.346 "name": "spare", 00:18:55.346 "uuid": "8ce8d9aa-88da-57dd-936e-4d122368f191", 00:18:55.346 "is_configured": true, 00:18:55.346 "data_offset": 2048, 00:18:55.346 "data_size": 63488 00:18:55.346 }, 00:18:55.346 { 00:18:55.346 "name": "BaseBdev2", 00:18:55.346 "uuid": "193666aa-bfd0-5ea4-a3c4-d3dfff993717", 00:18:55.346 "is_configured": true, 00:18:55.346 "data_offset": 2048, 00:18:55.346 "data_size": 63488 00:18:55.346 }, 00:18:55.346 { 00:18:55.346 "name": "BaseBdev3", 00:18:55.346 "uuid": "880a0627-ad2c-59f0-8bab-2ead3c6acb00", 00:18:55.346 "is_configured": true, 00:18:55.346 "data_offset": 2048, 00:18:55.346 "data_size": 63488 00:18:55.346 }, 00:18:55.346 { 00:18:55.346 "name": "BaseBdev4", 00:18:55.346 "uuid": "91d71167-4a07-5447-a21e-05017b731016", 00:18:55.346 "is_configured": true, 00:18:55.346 "data_offset": 2048, 00:18:55.346 "data_size": 63488 00:18:55.346 } 00:18:55.346 ] 00:18:55.346 }' 00:18:55.346 21:45:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:55.346 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:55.346 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:55.346 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:55.346 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:55.346 21:45:56 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.346 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.346 [2024-12-10 21:45:56.040899] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:55.346 [2024-12-10 21:45:56.100705] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:55.346 [2024-12-10 21:45:56.100765] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:55.346 [2024-12-10 21:45:56.100785] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:55.346 [2024-12-10 21:45:56.100792] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:55.605 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.605 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:55.605 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:55.605 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:55.605 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:55.605 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:55.605 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:55.605 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.605 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.606 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.606 21:45:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.606 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.606 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.606 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.606 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.606 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.606 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.606 "name": "raid_bdev1", 00:18:55.606 "uuid": "5e7d55c6-2922-4bdc-a690-ed091b9c7943", 00:18:55.606 "strip_size_kb": 64, 00:18:55.606 "state": "online", 00:18:55.606 "raid_level": "raid5f", 00:18:55.606 "superblock": true, 00:18:55.606 "num_base_bdevs": 4, 00:18:55.606 "num_base_bdevs_discovered": 3, 00:18:55.606 "num_base_bdevs_operational": 3, 00:18:55.606 "base_bdevs_list": [ 00:18:55.606 { 00:18:55.606 "name": null, 00:18:55.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.606 "is_configured": false, 00:18:55.606 "data_offset": 0, 00:18:55.606 "data_size": 63488 00:18:55.606 }, 00:18:55.606 { 00:18:55.606 "name": "BaseBdev2", 00:18:55.606 "uuid": "193666aa-bfd0-5ea4-a3c4-d3dfff993717", 00:18:55.606 "is_configured": true, 00:18:55.606 "data_offset": 2048, 00:18:55.606 "data_size": 63488 00:18:55.606 }, 00:18:55.606 { 00:18:55.606 "name": "BaseBdev3", 00:18:55.606 "uuid": "880a0627-ad2c-59f0-8bab-2ead3c6acb00", 00:18:55.606 "is_configured": true, 00:18:55.606 "data_offset": 2048, 00:18:55.606 "data_size": 63488 00:18:55.606 }, 00:18:55.606 { 00:18:55.606 "name": "BaseBdev4", 00:18:55.606 "uuid": "91d71167-4a07-5447-a21e-05017b731016", 00:18:55.606 "is_configured": true, 00:18:55.606 "data_offset": 2048, 00:18:55.606 
"data_size": 63488 00:18:55.606 } 00:18:55.606 ] 00:18:55.606 }' 00:18:55.606 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.606 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.865 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:55.865 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:55.865 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:55.865 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:55.865 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:55.865 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.865 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.865 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.865 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.865 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.866 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:55.866 "name": "raid_bdev1", 00:18:55.866 "uuid": "5e7d55c6-2922-4bdc-a690-ed091b9c7943", 00:18:55.866 "strip_size_kb": 64, 00:18:55.866 "state": "online", 00:18:55.866 "raid_level": "raid5f", 00:18:55.866 "superblock": true, 00:18:55.866 "num_base_bdevs": 4, 00:18:55.866 "num_base_bdevs_discovered": 3, 00:18:55.866 "num_base_bdevs_operational": 3, 00:18:55.866 "base_bdevs_list": [ 00:18:55.866 { 00:18:55.866 "name": null, 00:18:55.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.866 
"is_configured": false, 00:18:55.866 "data_offset": 0, 00:18:55.866 "data_size": 63488 00:18:55.866 }, 00:18:55.866 { 00:18:55.866 "name": "BaseBdev2", 00:18:55.866 "uuid": "193666aa-bfd0-5ea4-a3c4-d3dfff993717", 00:18:55.866 "is_configured": true, 00:18:55.866 "data_offset": 2048, 00:18:55.866 "data_size": 63488 00:18:55.866 }, 00:18:55.866 { 00:18:55.866 "name": "BaseBdev3", 00:18:55.866 "uuid": "880a0627-ad2c-59f0-8bab-2ead3c6acb00", 00:18:55.866 "is_configured": true, 00:18:55.866 "data_offset": 2048, 00:18:55.866 "data_size": 63488 00:18:55.866 }, 00:18:55.866 { 00:18:55.866 "name": "BaseBdev4", 00:18:55.866 "uuid": "91d71167-4a07-5447-a21e-05017b731016", 00:18:55.866 "is_configured": true, 00:18:55.866 "data_offset": 2048, 00:18:55.866 "data_size": 63488 00:18:55.866 } 00:18:55.866 ] 00:18:55.866 }' 00:18:55.866 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:56.125 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:56.125 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:56.125 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:56.125 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:56.125 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.125 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.125 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.125 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:56.125 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.125 21:45:56 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.125 [2024-12-10 21:45:56.734633] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:56.125 [2024-12-10 21:45:56.734686] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:56.125 [2024-12-10 21:45:56.734708] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:18:56.125 [2024-12-10 21:45:56.734718] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:56.125 [2024-12-10 21:45:56.735151] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:56.125 [2024-12-10 21:45:56.735170] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:56.125 [2024-12-10 21:45:56.735246] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:56.125 [2024-12-10 21:45:56.735259] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:56.125 [2024-12-10 21:45:56.735269] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:56.125 [2024-12-10 21:45:56.735279] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:56.125 BaseBdev1 00:18:56.125 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.125 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:57.064 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:57.064 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:57.064 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:57.064 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:57.064 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:57.064 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:57.064 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.064 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.064 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.064 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.064 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.064 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.064 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.064 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.064 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.064 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.064 "name": "raid_bdev1", 00:18:57.064 "uuid": "5e7d55c6-2922-4bdc-a690-ed091b9c7943", 00:18:57.064 "strip_size_kb": 64, 00:18:57.064 "state": "online", 00:18:57.064 "raid_level": "raid5f", 00:18:57.064 "superblock": true, 00:18:57.064 "num_base_bdevs": 4, 00:18:57.064 "num_base_bdevs_discovered": 3, 00:18:57.064 "num_base_bdevs_operational": 3, 00:18:57.064 "base_bdevs_list": [ 00:18:57.064 { 00:18:57.064 "name": null, 00:18:57.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.064 "is_configured": false, 00:18:57.064 
"data_offset": 0, 00:18:57.064 "data_size": 63488 00:18:57.064 }, 00:18:57.064 { 00:18:57.064 "name": "BaseBdev2", 00:18:57.064 "uuid": "193666aa-bfd0-5ea4-a3c4-d3dfff993717", 00:18:57.064 "is_configured": true, 00:18:57.064 "data_offset": 2048, 00:18:57.064 "data_size": 63488 00:18:57.064 }, 00:18:57.064 { 00:18:57.064 "name": "BaseBdev3", 00:18:57.064 "uuid": "880a0627-ad2c-59f0-8bab-2ead3c6acb00", 00:18:57.064 "is_configured": true, 00:18:57.064 "data_offset": 2048, 00:18:57.064 "data_size": 63488 00:18:57.064 }, 00:18:57.064 { 00:18:57.064 "name": "BaseBdev4", 00:18:57.064 "uuid": "91d71167-4a07-5447-a21e-05017b731016", 00:18:57.064 "is_configured": true, 00:18:57.064 "data_offset": 2048, 00:18:57.064 "data_size": 63488 00:18:57.064 } 00:18:57.064 ] 00:18:57.064 }' 00:18:57.064 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.064 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.634 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:57.634 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:57.634 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:57.634 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:57.634 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:57.634 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.634 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.634 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.634 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:57.634 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.634 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:57.634 "name": "raid_bdev1", 00:18:57.634 "uuid": "5e7d55c6-2922-4bdc-a690-ed091b9c7943", 00:18:57.634 "strip_size_kb": 64, 00:18:57.634 "state": "online", 00:18:57.634 "raid_level": "raid5f", 00:18:57.634 "superblock": true, 00:18:57.634 "num_base_bdevs": 4, 00:18:57.634 "num_base_bdevs_discovered": 3, 00:18:57.634 "num_base_bdevs_operational": 3, 00:18:57.634 "base_bdevs_list": [ 00:18:57.634 { 00:18:57.634 "name": null, 00:18:57.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.634 "is_configured": false, 00:18:57.634 "data_offset": 0, 00:18:57.634 "data_size": 63488 00:18:57.634 }, 00:18:57.634 { 00:18:57.634 "name": "BaseBdev2", 00:18:57.634 "uuid": "193666aa-bfd0-5ea4-a3c4-d3dfff993717", 00:18:57.634 "is_configured": true, 00:18:57.634 "data_offset": 2048, 00:18:57.634 "data_size": 63488 00:18:57.634 }, 00:18:57.634 { 00:18:57.634 "name": "BaseBdev3", 00:18:57.634 "uuid": "880a0627-ad2c-59f0-8bab-2ead3c6acb00", 00:18:57.634 "is_configured": true, 00:18:57.634 "data_offset": 2048, 00:18:57.634 "data_size": 63488 00:18:57.634 }, 00:18:57.634 { 00:18:57.634 "name": "BaseBdev4", 00:18:57.634 "uuid": "91d71167-4a07-5447-a21e-05017b731016", 00:18:57.634 "is_configured": true, 00:18:57.634 "data_offset": 2048, 00:18:57.634 "data_size": 63488 00:18:57.634 } 00:18:57.634 ] 00:18:57.634 }' 00:18:57.634 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:57.634 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:57.634 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:57.634 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:57.634 
21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:57.634 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:18:57.634 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:57.634 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:57.634 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:57.634 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:57.634 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:57.634 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:57.634 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.634 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.634 [2024-12-10 21:45:58.280238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:57.634 [2024-12-10 21:45:58.280430] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:57.634 [2024-12-10 21:45:58.280448] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:57.634 request: 00:18:57.634 { 00:18:57.634 "base_bdev": "BaseBdev1", 00:18:57.634 "raid_bdev": "raid_bdev1", 00:18:57.634 "method": "bdev_raid_add_base_bdev", 00:18:57.634 "req_id": 1 00:18:57.634 } 00:18:57.634 Got JSON-RPC error response 00:18:57.634 response: 00:18:57.634 { 00:18:57.634 "code": -22, 00:18:57.634 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:18:57.634 } 00:18:57.634 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:57.634 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:18:57.634 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:57.634 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:57.634 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:57.634 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:58.577 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:58.577 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:58.577 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:58.577 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:58.577 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:58.577 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:58.577 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.577 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.577 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.577 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.577 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.577 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.577 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.577 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.577 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.577 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.577 "name": "raid_bdev1", 00:18:58.577 "uuid": "5e7d55c6-2922-4bdc-a690-ed091b9c7943", 00:18:58.577 "strip_size_kb": 64, 00:18:58.577 "state": "online", 00:18:58.577 "raid_level": "raid5f", 00:18:58.577 "superblock": true, 00:18:58.577 "num_base_bdevs": 4, 00:18:58.577 "num_base_bdevs_discovered": 3, 00:18:58.577 "num_base_bdevs_operational": 3, 00:18:58.577 "base_bdevs_list": [ 00:18:58.578 { 00:18:58.578 "name": null, 00:18:58.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.578 "is_configured": false, 00:18:58.578 "data_offset": 0, 00:18:58.578 "data_size": 63488 00:18:58.578 }, 00:18:58.578 { 00:18:58.578 "name": "BaseBdev2", 00:18:58.578 "uuid": "193666aa-bfd0-5ea4-a3c4-d3dfff993717", 00:18:58.578 "is_configured": true, 00:18:58.578 "data_offset": 2048, 00:18:58.578 "data_size": 63488 00:18:58.578 }, 00:18:58.578 { 00:18:58.578 "name": "BaseBdev3", 00:18:58.578 "uuid": "880a0627-ad2c-59f0-8bab-2ead3c6acb00", 00:18:58.578 "is_configured": true, 00:18:58.578 "data_offset": 2048, 00:18:58.578 "data_size": 63488 00:18:58.578 }, 00:18:58.578 { 00:18:58.578 "name": "BaseBdev4", 00:18:58.578 "uuid": "91d71167-4a07-5447-a21e-05017b731016", 00:18:58.578 "is_configured": true, 00:18:58.578 "data_offset": 2048, 00:18:58.578 "data_size": 63488 00:18:58.578 } 00:18:58.578 ] 00:18:58.578 }' 00:18:58.578 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.578 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:18:59.147 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:59.147 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:59.147 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:59.147 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:59.147 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:59.147 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.147 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.147 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.147 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.147 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.147 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:59.147 "name": "raid_bdev1", 00:18:59.147 "uuid": "5e7d55c6-2922-4bdc-a690-ed091b9c7943", 00:18:59.147 "strip_size_kb": 64, 00:18:59.147 "state": "online", 00:18:59.147 "raid_level": "raid5f", 00:18:59.147 "superblock": true, 00:18:59.147 "num_base_bdevs": 4, 00:18:59.147 "num_base_bdevs_discovered": 3, 00:18:59.147 "num_base_bdevs_operational": 3, 00:18:59.147 "base_bdevs_list": [ 00:18:59.147 { 00:18:59.147 "name": null, 00:18:59.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.147 "is_configured": false, 00:18:59.147 "data_offset": 0, 00:18:59.147 "data_size": 63488 00:18:59.147 }, 00:18:59.147 { 00:18:59.147 "name": "BaseBdev2", 00:18:59.147 "uuid": "193666aa-bfd0-5ea4-a3c4-d3dfff993717", 00:18:59.147 "is_configured": true, 
00:18:59.147 "data_offset": 2048, 00:18:59.147 "data_size": 63488 00:18:59.147 }, 00:18:59.147 { 00:18:59.147 "name": "BaseBdev3", 00:18:59.147 "uuid": "880a0627-ad2c-59f0-8bab-2ead3c6acb00", 00:18:59.147 "is_configured": true, 00:18:59.147 "data_offset": 2048, 00:18:59.147 "data_size": 63488 00:18:59.147 }, 00:18:59.147 { 00:18:59.147 "name": "BaseBdev4", 00:18:59.147 "uuid": "91d71167-4a07-5447-a21e-05017b731016", 00:18:59.147 "is_configured": true, 00:18:59.147 "data_offset": 2048, 00:18:59.147 "data_size": 63488 00:18:59.147 } 00:18:59.147 ] 00:18:59.147 }' 00:18:59.147 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:59.147 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:59.147 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:59.147 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:59.147 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85294 00:18:59.147 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85294 ']' 00:18:59.147 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85294 00:18:59.147 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:18:59.147 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:59.147 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85294 00:18:59.147 killing process with pid 85294 00:18:59.147 Received shutdown signal, test time was about 60.000000 seconds 00:18:59.147 00:18:59.147 Latency(us) 00:18:59.147 [2024-12-10T21:45:59.930Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.147 [2024-12-10T21:45:59.930Z] 
=================================================================================================================== 00:18:59.147 [2024-12-10T21:45:59.930Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:59.147 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:59.147 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:59.147 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85294' 00:18:59.147 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85294 00:18:59.147 [2024-12-10 21:45:59.842628] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:59.147 [2024-12-10 21:45:59.842746] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:59.147 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85294 00:18:59.147 [2024-12-10 21:45:59.842822] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:59.147 [2024-12-10 21:45:59.842833] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:59.716 [2024-12-10 21:46:00.310815] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:00.655 21:46:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:19:00.655 00:19:00.655 real 0m26.322s 00:19:00.655 user 0m32.902s 00:19:00.655 sys 0m2.696s 00:19:00.655 21:46:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:00.655 21:46:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.655 ************************************ 00:19:00.655 END TEST raid5f_rebuild_test_sb 00:19:00.655 ************************************ 00:19:00.655 21:46:01 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:19:00.655 21:46:01 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:19:00.655 21:46:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:00.655 21:46:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:00.655 21:46:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:00.915 ************************************ 00:19:00.915 START TEST raid_state_function_test_sb_4k 00:19:00.915 ************************************ 00:19:00.915 21:46:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:19:00.915 21:46:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:00.915 21:46:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:00.915 21:46:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:00.915 21:46:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:00.915 21:46:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:00.915 21:46:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:00.915 21:46:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:00.915 21:46:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:00.915 21:46:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:00.915 21:46:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:00.915 21:46:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:00.915 21:46:01 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:00.915 21:46:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:00.915 21:46:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:00.915 21:46:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:00.915 21:46:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:00.915 21:46:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:00.915 21:46:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:00.915 21:46:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:00.915 21:46:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:00.915 21:46:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:00.915 21:46:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:00.915 21:46:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86099 00:19:00.915 21:46:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:00.915 21:46:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86099' 00:19:00.915 Process raid pid: 86099 00:19:00.915 21:46:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86099 00:19:00.915 21:46:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86099 ']' 00:19:00.915 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:19:00.915 21:46:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.915 21:46:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:00.915 21:46:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.915 21:46:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:00.915 21:46:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:00.915 [2024-12-10 21:46:01.542998] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:19:00.915 [2024-12-10 21:46:01.543110] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.174 [2024-12-10 21:46:01.714858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.174 [2024-12-10 21:46:01.821920] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.433 [2024-12-10 21:46:02.031978] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:01.433 [2024-12-10 21:46:02.032009] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:01.692 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:01.693 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:19:01.693 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:01.693 21:46:02 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.693 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.693 [2024-12-10 21:46:02.363832] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:01.693 [2024-12-10 21:46:02.363885] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:01.693 [2024-12-10 21:46:02.363895] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:01.693 [2024-12-10 21:46:02.363904] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:01.693 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.693 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:01.693 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:01.693 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:01.693 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:01.693 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:01.693 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:01.693 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.693 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.693 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.693 21:46:02 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.693 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.693 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:01.693 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.693 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.693 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.693 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.693 "name": "Existed_Raid", 00:19:01.693 "uuid": "a9e63aa1-835a-40ff-912f-9bfe4568ba3e", 00:19:01.693 "strip_size_kb": 0, 00:19:01.693 "state": "configuring", 00:19:01.693 "raid_level": "raid1", 00:19:01.693 "superblock": true, 00:19:01.693 "num_base_bdevs": 2, 00:19:01.693 "num_base_bdevs_discovered": 0, 00:19:01.693 "num_base_bdevs_operational": 2, 00:19:01.693 "base_bdevs_list": [ 00:19:01.693 { 00:19:01.693 "name": "BaseBdev1", 00:19:01.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.693 "is_configured": false, 00:19:01.693 "data_offset": 0, 00:19:01.693 "data_size": 0 00:19:01.693 }, 00:19:01.693 { 00:19:01.693 "name": "BaseBdev2", 00:19:01.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.693 "is_configured": false, 00:19:01.693 "data_offset": 0, 00:19:01.693 "data_size": 0 00:19:01.693 } 00:19:01.693 ] 00:19:01.693 }' 00:19:01.693 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.693 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.263 [2024-12-10 21:46:02.807016] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:02.263 [2024-12-10 21:46:02.807108] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.263 [2024-12-10 21:46:02.818989] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:02.263 [2024-12-10 21:46:02.819069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:02.263 [2024-12-10 21:46:02.819097] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:02.263 [2024-12-10 21:46:02.819121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:19:02.263 [2024-12-10 21:46:02.867002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:02.263 BaseBdev1 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.263 [ 00:19:02.263 { 00:19:02.263 "name": "BaseBdev1", 00:19:02.263 "aliases": [ 00:19:02.263 "26922b31-abba-46fc-b574-f393b536aef8" 00:19:02.263 
], 00:19:02.263 "product_name": "Malloc disk", 00:19:02.263 "block_size": 4096, 00:19:02.263 "num_blocks": 8192, 00:19:02.263 "uuid": "26922b31-abba-46fc-b574-f393b536aef8", 00:19:02.263 "assigned_rate_limits": { 00:19:02.263 "rw_ios_per_sec": 0, 00:19:02.263 "rw_mbytes_per_sec": 0, 00:19:02.263 "r_mbytes_per_sec": 0, 00:19:02.263 "w_mbytes_per_sec": 0 00:19:02.263 }, 00:19:02.263 "claimed": true, 00:19:02.263 "claim_type": "exclusive_write", 00:19:02.263 "zoned": false, 00:19:02.263 "supported_io_types": { 00:19:02.263 "read": true, 00:19:02.263 "write": true, 00:19:02.263 "unmap": true, 00:19:02.263 "flush": true, 00:19:02.263 "reset": true, 00:19:02.263 "nvme_admin": false, 00:19:02.263 "nvme_io": false, 00:19:02.263 "nvme_io_md": false, 00:19:02.263 "write_zeroes": true, 00:19:02.263 "zcopy": true, 00:19:02.263 "get_zone_info": false, 00:19:02.263 "zone_management": false, 00:19:02.263 "zone_append": false, 00:19:02.263 "compare": false, 00:19:02.263 "compare_and_write": false, 00:19:02.263 "abort": true, 00:19:02.263 "seek_hole": false, 00:19:02.263 "seek_data": false, 00:19:02.263 "copy": true, 00:19:02.263 "nvme_iov_md": false 00:19:02.263 }, 00:19:02.263 "memory_domains": [ 00:19:02.263 { 00:19:02.263 "dma_device_id": "system", 00:19:02.263 "dma_device_type": 1 00:19:02.263 }, 00:19:02.263 { 00:19:02.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:02.263 "dma_device_type": 2 00:19:02.263 } 00:19:02.263 ], 00:19:02.263 "driver_specific": {} 00:19:02.263 } 00:19:02.263 ] 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.263 "name": "Existed_Raid", 00:19:02.263 "uuid": "25f1c5c4-228d-416d-98d1-73e669291d40", 00:19:02.263 "strip_size_kb": 0, 00:19:02.263 "state": "configuring", 00:19:02.263 "raid_level": "raid1", 00:19:02.263 "superblock": true, 00:19:02.263 "num_base_bdevs": 2, 00:19:02.263 "num_base_bdevs_discovered": 1, 
00:19:02.263 "num_base_bdevs_operational": 2, 00:19:02.263 "base_bdevs_list": [ 00:19:02.263 { 00:19:02.263 "name": "BaseBdev1", 00:19:02.263 "uuid": "26922b31-abba-46fc-b574-f393b536aef8", 00:19:02.263 "is_configured": true, 00:19:02.263 "data_offset": 256, 00:19:02.263 "data_size": 7936 00:19:02.263 }, 00:19:02.263 { 00:19:02.263 "name": "BaseBdev2", 00:19:02.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.263 "is_configured": false, 00:19:02.263 "data_offset": 0, 00:19:02.263 "data_size": 0 00:19:02.263 } 00:19:02.263 ] 00:19:02.263 }' 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.263 21:46:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.523 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:02.523 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.523 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.523 [2024-12-10 21:46:03.282335] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:02.523 [2024-12-10 21:46:03.282466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:02.523 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.523 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:02.523 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.523 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.523 [2024-12-10 21:46:03.294352] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:02.523 [2024-12-10 21:46:03.296308] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:02.523 [2024-12-10 21:46:03.296351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:02.523 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.523 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:02.523 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:02.523 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:02.523 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:02.523 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:02.523 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:02.523 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:02.523 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:02.523 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.523 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.523 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.523 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.782 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:02.782 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:02.782 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.782 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.782 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.782 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.782 "name": "Existed_Raid", 00:19:02.782 "uuid": "4be8f9d6-d8c1-4320-8c93-c45d0e9099f8", 00:19:02.782 "strip_size_kb": 0, 00:19:02.782 "state": "configuring", 00:19:02.782 "raid_level": "raid1", 00:19:02.782 "superblock": true, 00:19:02.782 "num_base_bdevs": 2, 00:19:02.782 "num_base_bdevs_discovered": 1, 00:19:02.782 "num_base_bdevs_operational": 2, 00:19:02.782 "base_bdevs_list": [ 00:19:02.782 { 00:19:02.782 "name": "BaseBdev1", 00:19:02.782 "uuid": "26922b31-abba-46fc-b574-f393b536aef8", 00:19:02.782 "is_configured": true, 00:19:02.782 "data_offset": 256, 00:19:02.782 "data_size": 7936 00:19:02.782 }, 00:19:02.782 { 00:19:02.782 "name": "BaseBdev2", 00:19:02.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.782 "is_configured": false, 00:19:02.782 "data_offset": 0, 00:19:02.782 "data_size": 0 00:19:02.782 } 00:19:02.782 ] 00:19:02.782 }' 00:19:02.782 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.782 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.044 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:19:03.044 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.044 21:46:03 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.044 [2024-12-10 21:46:03.783457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:03.044 [2024-12-10 21:46:03.783830] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:03.044 [2024-12-10 21:46:03.783886] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:03.044 [2024-12-10 21:46:03.784170] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:03.044 [2024-12-10 21:46:03.784414] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:03.044 [2024-12-10 21:46:03.784482] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:03.044 BaseBdev2 00:19:03.044 [2024-12-10 21:46:03.784685] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:03.044 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.044 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:03.044 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:03.044 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:03.044 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:19:03.044 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:03.044 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:03.044 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:03.044 21:46:03 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.044 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.044 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.044 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:03.044 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.044 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.044 [ 00:19:03.044 { 00:19:03.044 "name": "BaseBdev2", 00:19:03.044 "aliases": [ 00:19:03.044 "d6d67103-1862-46af-818c-57c58f0969e1" 00:19:03.044 ], 00:19:03.044 "product_name": "Malloc disk", 00:19:03.044 "block_size": 4096, 00:19:03.044 "num_blocks": 8192, 00:19:03.044 "uuid": "d6d67103-1862-46af-818c-57c58f0969e1", 00:19:03.044 "assigned_rate_limits": { 00:19:03.044 "rw_ios_per_sec": 0, 00:19:03.044 "rw_mbytes_per_sec": 0, 00:19:03.044 "r_mbytes_per_sec": 0, 00:19:03.044 "w_mbytes_per_sec": 0 00:19:03.044 }, 00:19:03.044 "claimed": true, 00:19:03.044 "claim_type": "exclusive_write", 00:19:03.044 "zoned": false, 00:19:03.044 "supported_io_types": { 00:19:03.044 "read": true, 00:19:03.044 "write": true, 00:19:03.044 "unmap": true, 00:19:03.044 "flush": true, 00:19:03.044 "reset": true, 00:19:03.044 "nvme_admin": false, 00:19:03.044 "nvme_io": false, 00:19:03.044 "nvme_io_md": false, 00:19:03.044 "write_zeroes": true, 00:19:03.044 "zcopy": true, 00:19:03.044 "get_zone_info": false, 00:19:03.044 "zone_management": false, 00:19:03.044 "zone_append": false, 00:19:03.044 "compare": false, 00:19:03.044 "compare_and_write": false, 00:19:03.044 "abort": true, 00:19:03.044 "seek_hole": false, 00:19:03.044 "seek_data": false, 00:19:03.044 "copy": true, 00:19:03.044 "nvme_iov_md": false 
00:19:03.044 }, 00:19:03.044 "memory_domains": [ 00:19:03.044 { 00:19:03.044 "dma_device_id": "system", 00:19:03.044 "dma_device_type": 1 00:19:03.044 }, 00:19:03.044 { 00:19:03.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.044 "dma_device_type": 2 00:19:03.044 } 00:19:03.044 ], 00:19:03.044 "driver_specific": {} 00:19:03.044 } 00:19:03.044 ] 00:19:03.044 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.044 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:19:03.305 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:03.305 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:03.305 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:03.305 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:03.305 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:03.305 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:03.305 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:03.305 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:03.306 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.306 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.306 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.306 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:19:03.306 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.306 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.306 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.306 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:03.306 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.306 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.306 "name": "Existed_Raid", 00:19:03.306 "uuid": "4be8f9d6-d8c1-4320-8c93-c45d0e9099f8", 00:19:03.306 "strip_size_kb": 0, 00:19:03.306 "state": "online", 00:19:03.306 "raid_level": "raid1", 00:19:03.306 "superblock": true, 00:19:03.306 "num_base_bdevs": 2, 00:19:03.306 "num_base_bdevs_discovered": 2, 00:19:03.306 "num_base_bdevs_operational": 2, 00:19:03.306 "base_bdevs_list": [ 00:19:03.306 { 00:19:03.306 "name": "BaseBdev1", 00:19:03.306 "uuid": "26922b31-abba-46fc-b574-f393b536aef8", 00:19:03.306 "is_configured": true, 00:19:03.306 "data_offset": 256, 00:19:03.306 "data_size": 7936 00:19:03.306 }, 00:19:03.306 { 00:19:03.306 "name": "BaseBdev2", 00:19:03.306 "uuid": "d6d67103-1862-46af-818c-57c58f0969e1", 00:19:03.306 "is_configured": true, 00:19:03.306 "data_offset": 256, 00:19:03.306 "data_size": 7936 00:19:03.306 } 00:19:03.306 ] 00:19:03.306 }' 00:19:03.306 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.306 21:46:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.564 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:03.564 21:46:04 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:03.564 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:03.564 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:03.565 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:19:03.565 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:03.565 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:03.565 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.565 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.565 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:03.565 [2024-12-10 21:46:04.266925] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:03.565 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.565 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:03.565 "name": "Existed_Raid", 00:19:03.565 "aliases": [ 00:19:03.565 "4be8f9d6-d8c1-4320-8c93-c45d0e9099f8" 00:19:03.565 ], 00:19:03.565 "product_name": "Raid Volume", 00:19:03.565 "block_size": 4096, 00:19:03.565 "num_blocks": 7936, 00:19:03.565 "uuid": "4be8f9d6-d8c1-4320-8c93-c45d0e9099f8", 00:19:03.565 "assigned_rate_limits": { 00:19:03.565 "rw_ios_per_sec": 0, 00:19:03.565 "rw_mbytes_per_sec": 0, 00:19:03.565 "r_mbytes_per_sec": 0, 00:19:03.565 "w_mbytes_per_sec": 0 00:19:03.565 }, 00:19:03.565 "claimed": false, 00:19:03.565 "zoned": false, 00:19:03.565 "supported_io_types": { 00:19:03.565 "read": true, 
00:19:03.565 "write": true, 00:19:03.565 "unmap": false, 00:19:03.565 "flush": false, 00:19:03.565 "reset": true, 00:19:03.565 "nvme_admin": false, 00:19:03.565 "nvme_io": false, 00:19:03.565 "nvme_io_md": false, 00:19:03.565 "write_zeroes": true, 00:19:03.565 "zcopy": false, 00:19:03.565 "get_zone_info": false, 00:19:03.565 "zone_management": false, 00:19:03.565 "zone_append": false, 00:19:03.565 "compare": false, 00:19:03.565 "compare_and_write": false, 00:19:03.565 "abort": false, 00:19:03.565 "seek_hole": false, 00:19:03.565 "seek_data": false, 00:19:03.565 "copy": false, 00:19:03.565 "nvme_iov_md": false 00:19:03.565 }, 00:19:03.565 "memory_domains": [ 00:19:03.565 { 00:19:03.565 "dma_device_id": "system", 00:19:03.565 "dma_device_type": 1 00:19:03.565 }, 00:19:03.565 { 00:19:03.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.565 "dma_device_type": 2 00:19:03.565 }, 00:19:03.565 { 00:19:03.565 "dma_device_id": "system", 00:19:03.565 "dma_device_type": 1 00:19:03.565 }, 00:19:03.565 { 00:19:03.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.565 "dma_device_type": 2 00:19:03.565 } 00:19:03.565 ], 00:19:03.565 "driver_specific": { 00:19:03.565 "raid": { 00:19:03.565 "uuid": "4be8f9d6-d8c1-4320-8c93-c45d0e9099f8", 00:19:03.565 "strip_size_kb": 0, 00:19:03.565 "state": "online", 00:19:03.565 "raid_level": "raid1", 00:19:03.565 "superblock": true, 00:19:03.565 "num_base_bdevs": 2, 00:19:03.565 "num_base_bdevs_discovered": 2, 00:19:03.565 "num_base_bdevs_operational": 2, 00:19:03.565 "base_bdevs_list": [ 00:19:03.565 { 00:19:03.565 "name": "BaseBdev1", 00:19:03.565 "uuid": "26922b31-abba-46fc-b574-f393b536aef8", 00:19:03.565 "is_configured": true, 00:19:03.565 "data_offset": 256, 00:19:03.565 "data_size": 7936 00:19:03.565 }, 00:19:03.565 { 00:19:03.565 "name": "BaseBdev2", 00:19:03.565 "uuid": "d6d67103-1862-46af-818c-57c58f0969e1", 00:19:03.565 "is_configured": true, 00:19:03.565 "data_offset": 256, 00:19:03.565 "data_size": 7936 00:19:03.565 } 
00:19:03.565 ] 00:19:03.565 } 00:19:03.565 } 00:19:03.565 }' 00:19:03.565 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:03.565 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:03.565 BaseBdev2' 00:19:03.565 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.825 [2024-12-10 21:46:04.430434] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:03.825 21:46:04 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.825 "name": "Existed_Raid", 00:19:03.825 "uuid": "4be8f9d6-d8c1-4320-8c93-c45d0e9099f8", 00:19:03.825 "strip_size_kb": 0, 00:19:03.825 "state": "online", 00:19:03.825 "raid_level": "raid1", 00:19:03.825 "superblock": true, 00:19:03.825 
"num_base_bdevs": 2, 00:19:03.825 "num_base_bdevs_discovered": 1, 00:19:03.825 "num_base_bdevs_operational": 1, 00:19:03.825 "base_bdevs_list": [ 00:19:03.825 { 00:19:03.825 "name": null, 00:19:03.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.825 "is_configured": false, 00:19:03.825 "data_offset": 0, 00:19:03.825 "data_size": 7936 00:19:03.825 }, 00:19:03.825 { 00:19:03.825 "name": "BaseBdev2", 00:19:03.825 "uuid": "d6d67103-1862-46af-818c-57c58f0969e1", 00:19:03.825 "is_configured": true, 00:19:03.825 "data_offset": 256, 00:19:03.825 "data_size": 7936 00:19:03.825 } 00:19:03.825 ] 00:19:03.825 }' 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.825 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.393 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:04.393 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:04.394 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:04.394 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.394 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.394 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.394 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.394 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:04.394 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:04.394 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:19:04.394 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.394 21:46:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.394 [2024-12-10 21:46:04.951685] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:04.394 [2024-12-10 21:46:04.951788] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:04.394 [2024-12-10 21:46:05.048222] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:04.394 [2024-12-10 21:46:05.048324] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:04.394 [2024-12-10 21:46:05.048368] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:04.394 21:46:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.394 21:46:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:04.394 21:46:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:04.394 21:46:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.394 21:46:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:04.394 21:46:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.394 21:46:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.394 21:46:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.394 21:46:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:04.394 21:46:05 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:04.394 21:46:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:04.394 21:46:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86099 00:19:04.394 21:46:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86099 ']' 00:19:04.394 21:46:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86099 00:19:04.394 21:46:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:19:04.394 21:46:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:04.394 21:46:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86099 00:19:04.394 21:46:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:04.394 21:46:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:04.394 21:46:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86099' 00:19:04.394 killing process with pid 86099 00:19:04.394 21:46:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86099 00:19:04.394 [2024-12-10 21:46:05.142918] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:04.394 21:46:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86099 00:19:04.394 [2024-12-10 21:46:05.159055] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:05.774 21:46:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:19:05.774 00:19:05.774 real 0m4.769s 00:19:05.774 user 0m6.850s 00:19:05.774 sys 0m0.774s 00:19:05.774 21:46:06 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:05.774 21:46:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.774 ************************************ 00:19:05.774 END TEST raid_state_function_test_sb_4k 00:19:05.774 ************************************ 00:19:05.774 21:46:06 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:19:05.774 21:46:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:05.774 21:46:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:05.774 21:46:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:05.774 ************************************ 00:19:05.774 START TEST raid_superblock_test_4k 00:19:05.774 ************************************ 00:19:05.774 21:46:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:19:05.774 21:46:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:05.774 21:46:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:05.774 21:46:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:05.774 21:46:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:05.774 21:46:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:05.774 21:46:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:05.774 21:46:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:05.774 21:46:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:05.774 21:46:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:05.774 
21:46:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:05.774 21:46:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:05.774 21:46:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:05.774 21:46:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:05.774 21:46:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:05.774 21:46:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:05.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:05.774 21:46:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86351 00:19:05.774 21:46:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:05.774 21:46:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86351 00:19:05.774 21:46:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86351 ']' 00:19:05.774 21:46:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.774 21:46:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:05.774 21:46:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.774 21:46:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:05.774 21:46:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.774 [2024-12-10 21:46:06.378341] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:19:05.774 [2024-12-10 21:46:06.378483] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86351 ] 00:19:05.774 [2024-12-10 21:46:06.549567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.034 [2024-12-10 21:46:06.662379] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.294 [2024-12-10 21:46:06.850447] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:06.294 [2024-12-10 21:46:06.850501] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.555 malloc1 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.555 [2024-12-10 21:46:07.247513] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:06.555 [2024-12-10 21:46:07.247618] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:06.555 [2024-12-10 21:46:07.247658] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:06.555 [2024-12-10 21:46:07.247686] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:06.555 [2024-12-10 21:46:07.249726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:06.555 [2024-12-10 21:46:07.249801] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:06.555 pt1 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.555 malloc2 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.555 [2024-12-10 21:46:07.305356] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:06.555 [2024-12-10 21:46:07.305472] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:06.555 [2024-12-10 21:46:07.305513] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:06.555 [2024-12-10 21:46:07.305543] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:06.555 [2024-12-10 21:46:07.307614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:06.555 [2024-12-10 
21:46:07.307684] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:06.555 pt2 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.555 [2024-12-10 21:46:07.317382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:06.555 [2024-12-10 21:46:07.319150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:06.555 [2024-12-10 21:46:07.319358] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:06.555 [2024-12-10 21:46:07.319380] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:06.555 [2024-12-10 21:46:07.319691] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:06.555 [2024-12-10 21:46:07.319849] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:06.555 [2024-12-10 21:46:07.319881] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:06.555 [2024-12-10 21:46:07.320040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.555 21:46:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.815 21:46:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.815 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.815 "name": "raid_bdev1", 00:19:06.815 "uuid": "80caa882-90d7-4750-a049-77c4b4de74c9", 00:19:06.815 "strip_size_kb": 0, 00:19:06.815 "state": "online", 00:19:06.815 "raid_level": "raid1", 00:19:06.815 "superblock": true, 00:19:06.815 "num_base_bdevs": 2, 00:19:06.815 
"num_base_bdevs_discovered": 2, 00:19:06.815 "num_base_bdevs_operational": 2, 00:19:06.815 "base_bdevs_list": [ 00:19:06.815 { 00:19:06.815 "name": "pt1", 00:19:06.815 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:06.815 "is_configured": true, 00:19:06.815 "data_offset": 256, 00:19:06.815 "data_size": 7936 00:19:06.815 }, 00:19:06.815 { 00:19:06.815 "name": "pt2", 00:19:06.815 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:06.815 "is_configured": true, 00:19:06.815 "data_offset": 256, 00:19:06.815 "data_size": 7936 00:19:06.815 } 00:19:06.815 ] 00:19:06.815 }' 00:19:06.815 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.815 21:46:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.075 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:07.075 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:07.075 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:07.075 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:07.075 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:19:07.075 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:07.075 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:07.075 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:07.075 21:46:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.075 21:46:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.075 [2024-12-10 21:46:07.720934] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:19:07.075 21:46:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.075 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:07.075 "name": "raid_bdev1", 00:19:07.075 "aliases": [ 00:19:07.075 "80caa882-90d7-4750-a049-77c4b4de74c9" 00:19:07.075 ], 00:19:07.075 "product_name": "Raid Volume", 00:19:07.075 "block_size": 4096, 00:19:07.075 "num_blocks": 7936, 00:19:07.075 "uuid": "80caa882-90d7-4750-a049-77c4b4de74c9", 00:19:07.075 "assigned_rate_limits": { 00:19:07.075 "rw_ios_per_sec": 0, 00:19:07.075 "rw_mbytes_per_sec": 0, 00:19:07.075 "r_mbytes_per_sec": 0, 00:19:07.075 "w_mbytes_per_sec": 0 00:19:07.075 }, 00:19:07.075 "claimed": false, 00:19:07.075 "zoned": false, 00:19:07.075 "supported_io_types": { 00:19:07.076 "read": true, 00:19:07.076 "write": true, 00:19:07.076 "unmap": false, 00:19:07.076 "flush": false, 00:19:07.076 "reset": true, 00:19:07.076 "nvme_admin": false, 00:19:07.076 "nvme_io": false, 00:19:07.076 "nvme_io_md": false, 00:19:07.076 "write_zeroes": true, 00:19:07.076 "zcopy": false, 00:19:07.076 "get_zone_info": false, 00:19:07.076 "zone_management": false, 00:19:07.076 "zone_append": false, 00:19:07.076 "compare": false, 00:19:07.076 "compare_and_write": false, 00:19:07.076 "abort": false, 00:19:07.076 "seek_hole": false, 00:19:07.076 "seek_data": false, 00:19:07.076 "copy": false, 00:19:07.076 "nvme_iov_md": false 00:19:07.076 }, 00:19:07.076 "memory_domains": [ 00:19:07.076 { 00:19:07.076 "dma_device_id": "system", 00:19:07.076 "dma_device_type": 1 00:19:07.076 }, 00:19:07.076 { 00:19:07.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.076 "dma_device_type": 2 00:19:07.076 }, 00:19:07.076 { 00:19:07.076 "dma_device_id": "system", 00:19:07.076 "dma_device_type": 1 00:19:07.076 }, 00:19:07.076 { 00:19:07.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.076 "dma_device_type": 2 00:19:07.076 } 00:19:07.076 ], 
00:19:07.076 "driver_specific": { 00:19:07.076 "raid": { 00:19:07.076 "uuid": "80caa882-90d7-4750-a049-77c4b4de74c9", 00:19:07.076 "strip_size_kb": 0, 00:19:07.076 "state": "online", 00:19:07.076 "raid_level": "raid1", 00:19:07.076 "superblock": true, 00:19:07.076 "num_base_bdevs": 2, 00:19:07.076 "num_base_bdevs_discovered": 2, 00:19:07.076 "num_base_bdevs_operational": 2, 00:19:07.076 "base_bdevs_list": [ 00:19:07.076 { 00:19:07.076 "name": "pt1", 00:19:07.076 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:07.076 "is_configured": true, 00:19:07.076 "data_offset": 256, 00:19:07.076 "data_size": 7936 00:19:07.076 }, 00:19:07.076 { 00:19:07.076 "name": "pt2", 00:19:07.076 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:07.076 "is_configured": true, 00:19:07.076 "data_offset": 256, 00:19:07.076 "data_size": 7936 00:19:07.076 } 00:19:07.076 ] 00:19:07.076 } 00:19:07.076 } 00:19:07.076 }' 00:19:07.076 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:07.076 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:07.076 pt2' 00:19:07.076 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:07.076 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:19:07.076 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:07.076 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:07.076 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:07.076 21:46:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.076 21:46:07 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.336 21:46:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.336 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:07.336 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:07.336 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:07.336 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:07.336 21:46:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.336 21:46:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.336 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:07.336 21:46:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.336 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:07.336 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:07.336 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:07.336 21:46:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.336 21:46:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.336 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:07.336 [2024-12-10 21:46:07.920572] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:07.336 21:46:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:19:07.336 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=80caa882-90d7-4750-a049-77c4b4de74c9 00:19:07.336 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 80caa882-90d7-4750-a049-77c4b4de74c9 ']' 00:19:07.336 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:07.336 21:46:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.336 21:46:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.336 [2024-12-10 21:46:07.968205] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:07.336 [2024-12-10 21:46:07.968266] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:07.336 [2024-12-10 21:46:07.968381] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:07.337 [2024-12-10 21:46:07.968468] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:07.337 [2024-12-10 21:46:07.968517] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:07.337 21:46:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.337 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.337 21:46:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.337 21:46:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.337 21:46:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:07.337 21:46:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.337 21:46:08 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:07.337 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:07.337 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:07.337 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:07.337 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.337 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.337 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.337 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:07.337 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:07.337 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.337 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.337 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.337 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:07.337 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:07.337 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.337 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.337 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.337 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:07.337 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:07.337 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:19:07.337 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:07.337 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:07.337 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:07.337 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:07.337 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:07.337 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:07.337 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.337 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.337 [2024-12-10 21:46:08.104241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:07.337 [2024-12-10 21:46:08.106102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:07.337 [2024-12-10 21:46:08.106217] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:07.337 [2024-12-10 21:46:08.106273] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:07.337 [2024-12-10 21:46:08.106304] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:07.337 [2024-12-10 21:46:08.106315] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:07.337 request: 00:19:07.337 { 00:19:07.337 "name": "raid_bdev1", 00:19:07.337 "raid_level": "raid1", 00:19:07.337 "base_bdevs": [ 00:19:07.337 "malloc1", 00:19:07.337 "malloc2" 00:19:07.337 ], 00:19:07.337 "superblock": false, 00:19:07.337 "method": "bdev_raid_create", 00:19:07.337 "req_id": 1 00:19:07.337 } 00:19:07.337 Got JSON-RPC error response 00:19:07.337 response: 00:19:07.337 { 00:19:07.337 "code": -17, 00:19:07.337 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:07.337 } 00:19:07.337 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:07.337 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:19:07.337 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:07.337 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:07.337 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:07.337 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.337 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.337 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.337 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:07.598 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.598 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:07.598 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:07.598 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:19:07.598 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.598 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.598 [2024-12-10 21:46:08.152128] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:07.598 [2024-12-10 21:46:08.152179] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:07.598 [2024-12-10 21:46:08.152194] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:07.598 [2024-12-10 21:46:08.152205] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.598 [2024-12-10 21:46:08.154338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.598 [2024-12-10 21:46:08.154378] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:07.598 [2024-12-10 21:46:08.154464] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:07.598 [2024-12-10 21:46:08.154519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:07.598 pt1 00:19:07.598 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.598 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:07.598 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:07.598 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:07.598 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:07.598 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:07.598 21:46:08 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:07.598 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.598 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.598 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.598 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.598 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.598 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.598 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.598 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.598 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.598 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.598 "name": "raid_bdev1", 00:19:07.598 "uuid": "80caa882-90d7-4750-a049-77c4b4de74c9", 00:19:07.598 "strip_size_kb": 0, 00:19:07.598 "state": "configuring", 00:19:07.598 "raid_level": "raid1", 00:19:07.598 "superblock": true, 00:19:07.598 "num_base_bdevs": 2, 00:19:07.598 "num_base_bdevs_discovered": 1, 00:19:07.598 "num_base_bdevs_operational": 2, 00:19:07.598 "base_bdevs_list": [ 00:19:07.598 { 00:19:07.598 "name": "pt1", 00:19:07.598 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:07.598 "is_configured": true, 00:19:07.598 "data_offset": 256, 00:19:07.598 "data_size": 7936 00:19:07.598 }, 00:19:07.598 { 00:19:07.598 "name": null, 00:19:07.598 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:07.598 "is_configured": false, 00:19:07.598 "data_offset": 256, 00:19:07.598 "data_size": 7936 00:19:07.598 } 
00:19:07.598 ] 00:19:07.598 }' 00:19:07.598 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.598 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.858 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:07.858 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:07.858 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:07.858 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:07.858 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.858 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.858 [2024-12-10 21:46:08.599409] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:07.858 [2024-12-10 21:46:08.599535] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:07.858 [2024-12-10 21:46:08.599575] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:07.858 [2024-12-10 21:46:08.599604] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.858 [2024-12-10 21:46:08.600057] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.858 [2024-12-10 21:46:08.600126] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:07.858 [2024-12-10 21:46:08.600238] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:07.858 [2024-12-10 21:46:08.600293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:07.858 [2024-12-10 21:46:08.600460] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:19:07.858 [2024-12-10 21:46:08.600502] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:07.858 [2024-12-10 21:46:08.600761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:07.858 [2024-12-10 21:46:08.600944] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:07.858 [2024-12-10 21:46:08.600981] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:07.858 [2024-12-10 21:46:08.601165] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:07.858 pt2 00:19:07.858 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.858 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:07.858 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:07.858 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:07.858 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:07.858 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:07.858 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:07.858 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:07.858 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:07.858 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.858 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.858 21:46:08 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.858 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.858 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.858 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.858 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.858 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.858 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.118 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:08.118 "name": "raid_bdev1", 00:19:08.118 "uuid": "80caa882-90d7-4750-a049-77c4b4de74c9", 00:19:08.118 "strip_size_kb": 0, 00:19:08.118 "state": "online", 00:19:08.118 "raid_level": "raid1", 00:19:08.118 "superblock": true, 00:19:08.118 "num_base_bdevs": 2, 00:19:08.118 "num_base_bdevs_discovered": 2, 00:19:08.118 "num_base_bdevs_operational": 2, 00:19:08.118 "base_bdevs_list": [ 00:19:08.118 { 00:19:08.118 "name": "pt1", 00:19:08.118 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:08.118 "is_configured": true, 00:19:08.118 "data_offset": 256, 00:19:08.118 "data_size": 7936 00:19:08.118 }, 00:19:08.118 { 00:19:08.118 "name": "pt2", 00:19:08.118 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:08.118 "is_configured": true, 00:19:08.118 "data_offset": 256, 00:19:08.118 "data_size": 7936 00:19:08.118 } 00:19:08.118 ] 00:19:08.118 }' 00:19:08.118 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:08.118 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:08.378 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:19:08.378 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:08.378 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:08.378 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:08.378 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:19:08.378 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:08.378 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:08.378 21:46:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:08.378 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.378 21:46:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:08.378 [2024-12-10 21:46:08.982943] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:08.379 21:46:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.379 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:08.379 "name": "raid_bdev1", 00:19:08.379 "aliases": [ 00:19:08.379 "80caa882-90d7-4750-a049-77c4b4de74c9" 00:19:08.379 ], 00:19:08.379 "product_name": "Raid Volume", 00:19:08.379 "block_size": 4096, 00:19:08.379 "num_blocks": 7936, 00:19:08.379 "uuid": "80caa882-90d7-4750-a049-77c4b4de74c9", 00:19:08.379 "assigned_rate_limits": { 00:19:08.379 "rw_ios_per_sec": 0, 00:19:08.379 "rw_mbytes_per_sec": 0, 00:19:08.379 "r_mbytes_per_sec": 0, 00:19:08.379 "w_mbytes_per_sec": 0 00:19:08.379 }, 00:19:08.379 "claimed": false, 00:19:08.379 "zoned": false, 00:19:08.379 "supported_io_types": { 00:19:08.379 "read": true, 00:19:08.379 "write": true, 00:19:08.379 "unmap": false, 
00:19:08.379 "flush": false, 00:19:08.379 "reset": true, 00:19:08.379 "nvme_admin": false, 00:19:08.379 "nvme_io": false, 00:19:08.379 "nvme_io_md": false, 00:19:08.379 "write_zeroes": true, 00:19:08.379 "zcopy": false, 00:19:08.379 "get_zone_info": false, 00:19:08.379 "zone_management": false, 00:19:08.379 "zone_append": false, 00:19:08.379 "compare": false, 00:19:08.379 "compare_and_write": false, 00:19:08.379 "abort": false, 00:19:08.379 "seek_hole": false, 00:19:08.379 "seek_data": false, 00:19:08.379 "copy": false, 00:19:08.379 "nvme_iov_md": false 00:19:08.379 }, 00:19:08.379 "memory_domains": [ 00:19:08.379 { 00:19:08.379 "dma_device_id": "system", 00:19:08.379 "dma_device_type": 1 00:19:08.379 }, 00:19:08.379 { 00:19:08.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:08.379 "dma_device_type": 2 00:19:08.379 }, 00:19:08.379 { 00:19:08.379 "dma_device_id": "system", 00:19:08.379 "dma_device_type": 1 00:19:08.379 }, 00:19:08.379 { 00:19:08.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:08.379 "dma_device_type": 2 00:19:08.379 } 00:19:08.379 ], 00:19:08.379 "driver_specific": { 00:19:08.379 "raid": { 00:19:08.379 "uuid": "80caa882-90d7-4750-a049-77c4b4de74c9", 00:19:08.379 "strip_size_kb": 0, 00:19:08.379 "state": "online", 00:19:08.379 "raid_level": "raid1", 00:19:08.379 "superblock": true, 00:19:08.379 "num_base_bdevs": 2, 00:19:08.379 "num_base_bdevs_discovered": 2, 00:19:08.379 "num_base_bdevs_operational": 2, 00:19:08.379 "base_bdevs_list": [ 00:19:08.379 { 00:19:08.379 "name": "pt1", 00:19:08.379 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:08.379 "is_configured": true, 00:19:08.379 "data_offset": 256, 00:19:08.379 "data_size": 7936 00:19:08.379 }, 00:19:08.379 { 00:19:08.379 "name": "pt2", 00:19:08.379 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:08.379 "is_configured": true, 00:19:08.379 "data_offset": 256, 00:19:08.379 "data_size": 7936 00:19:08.379 } 00:19:08.379 ] 00:19:08.379 } 00:19:08.379 } 00:19:08.379 }' 00:19:08.379 
21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:08.379 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:08.379 pt2' 00:19:08.379 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:08.379 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:19:08.379 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:08.379 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:08.379 21:46:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.379 21:46:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:08.379 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:08.379 21:46:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.379 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:08.379 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:08.379 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:08.639 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:08.639 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:08.639 21:46:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.639 
21:46:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:08.639 21:46:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.639 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:08.639 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:08.639 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:08.639 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:08.639 21:46:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.639 21:46:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:08.639 [2024-12-10 21:46:09.214562] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:08.639 21:46:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.639 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 80caa882-90d7-4750-a049-77c4b4de74c9 '!=' 80caa882-90d7-4750-a049-77c4b4de74c9 ']' 00:19:08.639 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:08.639 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:08.639 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:19:08.639 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:08.639 21:46:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.639 21:46:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:08.639 [2024-12-10 21:46:09.258270] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:08.639 
21:46:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.639 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:08.639 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:08.639 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:08.639 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:08.639 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:08.639 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:08.639 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:08.639 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:08.639 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:08.639 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:08.639 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.639 21:46:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.639 21:46:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:08.639 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.639 21:46:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.639 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:08.639 "name": "raid_bdev1", 00:19:08.639 "uuid": "80caa882-90d7-4750-a049-77c4b4de74c9", 
00:19:08.639 "strip_size_kb": 0, 00:19:08.639 "state": "online", 00:19:08.639 "raid_level": "raid1", 00:19:08.639 "superblock": true, 00:19:08.639 "num_base_bdevs": 2, 00:19:08.639 "num_base_bdevs_discovered": 1, 00:19:08.639 "num_base_bdevs_operational": 1, 00:19:08.639 "base_bdevs_list": [ 00:19:08.639 { 00:19:08.639 "name": null, 00:19:08.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.639 "is_configured": false, 00:19:08.639 "data_offset": 0, 00:19:08.639 "data_size": 7936 00:19:08.639 }, 00:19:08.639 { 00:19:08.640 "name": "pt2", 00:19:08.640 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:08.640 "is_configured": true, 00:19:08.640 "data_offset": 256, 00:19:08.640 "data_size": 7936 00:19:08.640 } 00:19:08.640 ] 00:19:08.640 }' 00:19:08.640 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:08.640 21:46:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.209 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:09.209 21:46:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.209 21:46:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.209 [2024-12-10 21:46:09.749452] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:09.209 [2024-12-10 21:46:09.749521] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:09.209 [2024-12-10 21:46:09.749616] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:09.209 [2024-12-10 21:46:09.749679] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:09.209 [2024-12-10 21:46:09.749727] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:09.209 21:46:09 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.209 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:09.209 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.209 21:46:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.209 21:46:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.209 21:46:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.209 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:09.209 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:09.209 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:09.209 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:09.209 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:09.209 21:46:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.209 21:46:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.209 21:46:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.209 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:09.209 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:09.209 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:09.209 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:09.209 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:19:09.210 21:46:09 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:09.210 21:46:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.210 21:46:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.210 [2024-12-10 21:46:09.805294] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:09.210 [2024-12-10 21:46:09.805393] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:09.210 [2024-12-10 21:46:09.805412] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:09.210 [2024-12-10 21:46:09.805431] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:09.210 [2024-12-10 21:46:09.807571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:09.210 [2024-12-10 21:46:09.807612] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:09.210 [2024-12-10 21:46:09.807689] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:09.210 [2024-12-10 21:46:09.807739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:09.210 [2024-12-10 21:46:09.807844] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:09.210 [2024-12-10 21:46:09.807866] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:09.210 [2024-12-10 21:46:09.808114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:09.210 [2024-12-10 21:46:09.808302] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:09.210 [2024-12-10 21:46:09.808312] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:19:09.210 [2024-12-10 21:46:09.808529] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:09.210 pt2 00:19:09.210 21:46:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.210 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:09.210 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:09.210 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:09.210 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:09.210 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:09.210 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:09.210 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.210 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.210 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.210 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.210 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.210 21:46:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.210 21:46:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.210 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.210 21:46:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.210 21:46:09 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.210 "name": "raid_bdev1", 00:19:09.210 "uuid": "80caa882-90d7-4750-a049-77c4b4de74c9", 00:19:09.210 "strip_size_kb": 0, 00:19:09.210 "state": "online", 00:19:09.210 "raid_level": "raid1", 00:19:09.210 "superblock": true, 00:19:09.210 "num_base_bdevs": 2, 00:19:09.210 "num_base_bdevs_discovered": 1, 00:19:09.210 "num_base_bdevs_operational": 1, 00:19:09.210 "base_bdevs_list": [ 00:19:09.210 { 00:19:09.210 "name": null, 00:19:09.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.210 "is_configured": false, 00:19:09.210 "data_offset": 256, 00:19:09.210 "data_size": 7936 00:19:09.210 }, 00:19:09.210 { 00:19:09.210 "name": "pt2", 00:19:09.210 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:09.210 "is_configured": true, 00:19:09.210 "data_offset": 256, 00:19:09.210 "data_size": 7936 00:19:09.210 } 00:19:09.210 ] 00:19:09.210 }' 00:19:09.210 21:46:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.210 21:46:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.469 21:46:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:09.470 21:46:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.470 21:46:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.470 [2024-12-10 21:46:10.216577] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:09.470 [2024-12-10 21:46:10.216651] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:09.470 [2024-12-10 21:46:10.216738] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:09.470 [2024-12-10 21:46:10.216813] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:09.470 [2024-12-10 21:46:10.216855] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:09.470 21:46:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.470 21:46:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.470 21:46:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:09.470 21:46:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.470 21:46:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.470 21:46:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.730 21:46:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:09.730 21:46:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:09.730 21:46:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:09.730 21:46:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:09.730 21:46:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.730 21:46:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.730 [2024-12-10 21:46:10.276501] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:09.730 [2024-12-10 21:46:10.276552] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:09.730 [2024-12-10 21:46:10.276575] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:09.730 [2024-12-10 21:46:10.276585] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:09.730 [2024-12-10 21:46:10.278635] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:09.730 [2024-12-10 21:46:10.278670] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:09.730 [2024-12-10 21:46:10.278745] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:09.730 [2024-12-10 21:46:10.278800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:09.730 [2024-12-10 21:46:10.278939] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:09.730 [2024-12-10 21:46:10.278950] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:09.730 [2024-12-10 21:46:10.278965] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:09.730 [2024-12-10 21:46:10.279018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:09.730 [2024-12-10 21:46:10.279102] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:09.730 [2024-12-10 21:46:10.279115] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:09.730 [2024-12-10 21:46:10.279354] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:09.730 [2024-12-10 21:46:10.279555] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:09.730 [2024-12-10 21:46:10.279598] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:09.730 [2024-12-10 21:46:10.279774] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:09.730 pt1 00:19:09.730 21:46:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.730 21:46:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:19:09.730 21:46:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:09.730 21:46:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:09.730 21:46:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:09.730 21:46:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:09.730 21:46:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:09.730 21:46:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:09.730 21:46:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.730 21:46:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.730 21:46:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.730 21:46:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.730 21:46:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.730 21:46:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.730 21:46:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.730 21:46:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.730 21:46:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.730 21:46:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.730 "name": "raid_bdev1", 00:19:09.730 "uuid": "80caa882-90d7-4750-a049-77c4b4de74c9", 00:19:09.730 "strip_size_kb": 0, 00:19:09.730 "state": "online", 00:19:09.730 "raid_level": "raid1", 
00:19:09.730 "superblock": true, 00:19:09.730 "num_base_bdevs": 2, 00:19:09.730 "num_base_bdevs_discovered": 1, 00:19:09.730 "num_base_bdevs_operational": 1, 00:19:09.730 "base_bdevs_list": [ 00:19:09.730 { 00:19:09.730 "name": null, 00:19:09.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.730 "is_configured": false, 00:19:09.730 "data_offset": 256, 00:19:09.730 "data_size": 7936 00:19:09.730 }, 00:19:09.730 { 00:19:09.730 "name": "pt2", 00:19:09.730 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:09.730 "is_configured": true, 00:19:09.730 "data_offset": 256, 00:19:09.730 "data_size": 7936 00:19:09.730 } 00:19:09.730 ] 00:19:09.730 }' 00:19:09.730 21:46:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.730 21:46:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.989 21:46:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:09.989 21:46:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.989 21:46:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.989 21:46:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:09.989 21:46:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.989 21:46:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:09.989 21:46:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:09.989 21:46:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:09.989 21:46:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.989 21:46:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.989 
[2024-12-10 21:46:10.727978] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:09.989 21:46:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.989 21:46:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 80caa882-90d7-4750-a049-77c4b4de74c9 '!=' 80caa882-90d7-4750-a049-77c4b4de74c9 ']' 00:19:09.989 21:46:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86351 00:19:09.989 21:46:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86351 ']' 00:19:09.989 21:46:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86351 00:19:09.989 21:46:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:19:10.248 21:46:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:10.248 21:46:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86351 00:19:10.248 21:46:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:10.248 killing process with pid 86351 00:19:10.248 21:46:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:10.248 21:46:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86351' 00:19:10.248 21:46:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86351 00:19:10.248 [2024-12-10 21:46:10.806160] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:10.248 [2024-12-10 21:46:10.806245] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:10.248 [2024-12-10 21:46:10.806293] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:10.248 [2024-12-10 21:46:10.806306] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:10.248 21:46:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86351 00:19:10.248 [2024-12-10 21:46:11.008708] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:11.629 21:46:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:19:11.629 00:19:11.629 real 0m5.795s 00:19:11.629 user 0m8.726s 00:19:11.629 sys 0m0.987s 00:19:11.629 21:46:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:11.629 ************************************ 00:19:11.629 END TEST raid_superblock_test_4k 00:19:11.629 ************************************ 00:19:11.629 21:46:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:11.629 21:46:12 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:19:11.629 21:46:12 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:19:11.629 21:46:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:11.629 21:46:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:11.629 21:46:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:11.629 ************************************ 00:19:11.629 START TEST raid_rebuild_test_sb_4k 00:19:11.629 ************************************ 00:19:11.629 21:46:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:19:11.629 21:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:11.629 21:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:11.629 21:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:11.629 21:46:12 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:11.629 21:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:11.629 21:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:11.629 21:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:11.629 21:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:11.629 21:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:11.629 21:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:11.629 21:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:11.629 21:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:11.629 21:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:11.629 21:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:11.629 21:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:11.629 21:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:11.629 21:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:11.629 21:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:11.629 21:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:11.629 21:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:11.629 21:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:11.629 21:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:11.629 21:46:12 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:11.629 21:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:11.629 21:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86668 00:19:11.629 21:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:11.629 21:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86668 00:19:11.629 21:46:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86668 ']' 00:19:11.629 21:46:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.629 21:46:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:11.629 21:46:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.629 21:46:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:11.629 21:46:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:11.629 [2024-12-10 21:46:12.256536] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:19:11.629 [2024-12-10 21:46:12.256744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:19:11.629 Zero copy mechanism will not be used. 
00:19:11.629 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86668 ] 00:19:11.889 [2024-12-10 21:46:12.413095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.889 [2024-12-10 21:46:12.517360] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.148 [2024-12-10 21:46:12.707089] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:12.148 [2024-12-10 21:46:12.707220] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:12.408 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:12.408 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:19:12.408 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:12.408 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:19:12.408 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.408 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:12.408 BaseBdev1_malloc 00:19:12.408 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.408 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:12.408 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.408 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:12.408 [2024-12-10 21:46:13.120854] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:12.408 [2024-12-10 21:46:13.120917] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:12.408 [2024-12-10 21:46:13.120938] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:12.408 [2024-12-10 21:46:13.120949] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:12.408 [2024-12-10 21:46:13.122960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:12.408 [2024-12-10 21:46:13.123001] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:12.408 BaseBdev1 00:19:12.408 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.408 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:12.408 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:19:12.408 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.408 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:12.408 BaseBdev2_malloc 00:19:12.408 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.408 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:12.408 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.408 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:12.408 [2024-12-10 21:46:13.173789] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:12.408 [2024-12-10 21:46:13.173842] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:12.408 [2024-12-10 21:46:13.173860] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x616000007e80 00:19:12.408 [2024-12-10 21:46:13.173871] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:12.408 [2024-12-10 21:46:13.175837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:12.408 [2024-12-10 21:46:13.175928] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:12.408 BaseBdev2 00:19:12.408 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.408 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:19:12.408 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.408 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:12.668 spare_malloc 00:19:12.668 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.668 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:12.668 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.668 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:12.668 spare_delay 00:19:12.668 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.668 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:12.668 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.668 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:12.668 [2024-12-10 21:46:13.271452] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:12.668 
[2024-12-10 21:46:13.271505] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:12.668 [2024-12-10 21:46:13.271524] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:12.668 [2024-12-10 21:46:13.271534] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:12.668 [2024-12-10 21:46:13.273600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:12.668 [2024-12-10 21:46:13.273638] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:12.668 spare 00:19:12.668 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.668 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:12.668 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.668 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:12.668 [2024-12-10 21:46:13.283492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:12.668 [2024-12-10 21:46:13.285240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:12.669 [2024-12-10 21:46:13.285432] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:12.669 [2024-12-10 21:46:13.285449] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:12.669 [2024-12-10 21:46:13.285683] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:12.669 [2024-12-10 21:46:13.285854] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:12.669 [2024-12-10 21:46:13.285863] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000007780 00:19:12.669 [2024-12-10 21:46:13.286011] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:12.669 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.669 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:12.669 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:12.669 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:12.669 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:12.669 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:12.669 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:12.669 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.669 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.669 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.669 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.669 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.669 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.669 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.669 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:12.669 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.669 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.669 "name": "raid_bdev1", 00:19:12.669 "uuid": "a76f2aab-ed40-436b-bad2-fc2f363c9f67", 00:19:12.669 "strip_size_kb": 0, 00:19:12.669 "state": "online", 00:19:12.669 "raid_level": "raid1", 00:19:12.669 "superblock": true, 00:19:12.669 "num_base_bdevs": 2, 00:19:12.669 "num_base_bdevs_discovered": 2, 00:19:12.669 "num_base_bdevs_operational": 2, 00:19:12.669 "base_bdevs_list": [ 00:19:12.669 { 00:19:12.669 "name": "BaseBdev1", 00:19:12.669 "uuid": "dd7a5823-0750-5c5e-9a74-0acd5a9540a6", 00:19:12.669 "is_configured": true, 00:19:12.669 "data_offset": 256, 00:19:12.669 "data_size": 7936 00:19:12.669 }, 00:19:12.669 { 00:19:12.669 "name": "BaseBdev2", 00:19:12.669 "uuid": "06aef473-37f6-5ce2-8989-34c3ffdf9a84", 00:19:12.669 "is_configured": true, 00:19:12.669 "data_offset": 256, 00:19:12.669 "data_size": 7936 00:19:12.669 } 00:19:12.669 ] 00:19:12.669 }' 00:19:12.669 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.669 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:12.929 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:12.929 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.929 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:12.929 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:12.929 [2024-12-10 21:46:13.698980] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:13.189 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.189 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:13.189 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:13.189 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.189 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:13.189 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:13.189 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.189 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:13.189 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:13.189 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:13.189 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:13.189 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:13.189 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:13.189 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:13.189 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:13.189 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:13.189 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:13.189 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:19:13.189 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:13.189 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:13.189 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:13.189 [2024-12-10 21:46:13.938406] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:13.189 /dev/nbd0 00:19:13.449 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:13.449 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:13.449 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:13.449 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:19:13.449 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:13.449 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:13.449 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:13.449 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:19:13.449 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:13.449 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:13.449 21:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:13.449 1+0 records in 00:19:13.449 1+0 records out 00:19:13.449 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000457664 s, 8.9 MB/s 00:19:13.449 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:13.449 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:19:13.449 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:13.449 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:13.449 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:19:13.449 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:13.449 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:13.449 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:13.449 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:13.449 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:19:14.018 7936+0 records in 00:19:14.018 7936+0 records out 00:19:14.018 32505856 bytes (33 MB, 31 MiB) copied, 0.621345 s, 52.3 MB/s 00:19:14.018 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:14.018 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:14.018 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:14.018 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:14.018 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:19:14.018 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:14.018 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:14.277 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:14.277 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd0 00:19:14.277 [2024-12-10 21:46:14.854519] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:14.277 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:14.277 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:14.277 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:14.277 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:14.277 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:14.277 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:14.277 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:14.277 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.277 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:14.277 [2024-12-10 21:46:14.870588] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:14.277 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.277 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:14.277 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:14.277 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:14.277 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:14.277 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:14.277 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:19:14.278 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:14.278 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:14.278 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:14.278 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:14.278 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.278 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.278 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.278 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:14.278 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.278 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:14.278 "name": "raid_bdev1", 00:19:14.278 "uuid": "a76f2aab-ed40-436b-bad2-fc2f363c9f67", 00:19:14.278 "strip_size_kb": 0, 00:19:14.278 "state": "online", 00:19:14.278 "raid_level": "raid1", 00:19:14.278 "superblock": true, 00:19:14.278 "num_base_bdevs": 2, 00:19:14.278 "num_base_bdevs_discovered": 1, 00:19:14.278 "num_base_bdevs_operational": 1, 00:19:14.278 "base_bdevs_list": [ 00:19:14.278 { 00:19:14.278 "name": null, 00:19:14.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.278 "is_configured": false, 00:19:14.278 "data_offset": 0, 00:19:14.278 "data_size": 7936 00:19:14.278 }, 00:19:14.278 { 00:19:14.278 "name": "BaseBdev2", 00:19:14.278 "uuid": "06aef473-37f6-5ce2-8989-34c3ffdf9a84", 00:19:14.278 "is_configured": true, 00:19:14.278 "data_offset": 256, 00:19:14.278 "data_size": 7936 00:19:14.278 } 00:19:14.278 ] 00:19:14.278 }' 00:19:14.278 21:46:14 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:14.278 21:46:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:14.535 21:46:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:14.535 21:46:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.535 21:46:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:14.535 [2024-12-10 21:46:15.301878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:14.792 [2024-12-10 21:46:15.317464] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:19:14.792 21:46:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.792 21:46:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:14.792 [2024-12-10 21:46:15.319371] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:15.728 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:15.728 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:15.728 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:15.728 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:15.728 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:15.728 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.728 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.728 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.728 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.728 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.728 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:15.728 "name": "raid_bdev1", 00:19:15.728 "uuid": "a76f2aab-ed40-436b-bad2-fc2f363c9f67", 00:19:15.728 "strip_size_kb": 0, 00:19:15.728 "state": "online", 00:19:15.728 "raid_level": "raid1", 00:19:15.728 "superblock": true, 00:19:15.728 "num_base_bdevs": 2, 00:19:15.728 "num_base_bdevs_discovered": 2, 00:19:15.728 "num_base_bdevs_operational": 2, 00:19:15.728 "process": { 00:19:15.728 "type": "rebuild", 00:19:15.728 "target": "spare", 00:19:15.728 "progress": { 00:19:15.728 "blocks": 2560, 00:19:15.728 "percent": 32 00:19:15.728 } 00:19:15.728 }, 00:19:15.728 "base_bdevs_list": [ 00:19:15.728 { 00:19:15.728 "name": "spare", 00:19:15.728 "uuid": "fb34d5ec-9740-5969-9387-0b9a07468f6e", 00:19:15.728 "is_configured": true, 00:19:15.729 "data_offset": 256, 00:19:15.729 "data_size": 7936 00:19:15.729 }, 00:19:15.729 { 00:19:15.729 "name": "BaseBdev2", 00:19:15.729 "uuid": "06aef473-37f6-5ce2-8989-34c3ffdf9a84", 00:19:15.729 "is_configured": true, 00:19:15.729 "data_offset": 256, 00:19:15.729 "data_size": 7936 00:19:15.729 } 00:19:15.729 ] 00:19:15.729 }' 00:19:15.729 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.729 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:15.729 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:15.729 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:15.729 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd 
bdev_raid_remove_base_bdev spare 00:19:15.729 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.729 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.729 [2024-12-10 21:46:16.446679] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:15.988 [2024-12-10 21:46:16.524232] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:15.988 [2024-12-10 21:46:16.524308] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:15.988 [2024-12-10 21:46:16.524323] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:15.988 [2024-12-10 21:46:16.524332] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:15.988 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.988 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:15.988 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:15.988 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:15.988 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:15.988 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:15.988 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:15.988 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.988 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.988 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:19:15.988 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.988 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.988 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.988 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.988 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.988 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.988 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.988 "name": "raid_bdev1", 00:19:15.988 "uuid": "a76f2aab-ed40-436b-bad2-fc2f363c9f67", 00:19:15.988 "strip_size_kb": 0, 00:19:15.988 "state": "online", 00:19:15.988 "raid_level": "raid1", 00:19:15.988 "superblock": true, 00:19:15.988 "num_base_bdevs": 2, 00:19:15.988 "num_base_bdevs_discovered": 1, 00:19:15.988 "num_base_bdevs_operational": 1, 00:19:15.988 "base_bdevs_list": [ 00:19:15.988 { 00:19:15.988 "name": null, 00:19:15.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.988 "is_configured": false, 00:19:15.988 "data_offset": 0, 00:19:15.988 "data_size": 7936 00:19:15.988 }, 00:19:15.988 { 00:19:15.988 "name": "BaseBdev2", 00:19:15.988 "uuid": "06aef473-37f6-5ce2-8989-34c3ffdf9a84", 00:19:15.988 "is_configured": true, 00:19:15.988 "data_offset": 256, 00:19:15.988 "data_size": 7936 00:19:15.988 } 00:19:15.988 ] 00:19:15.988 }' 00:19:15.988 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.988 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.247 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:16.247 
21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:16.247 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:16.247 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:16.247 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:16.247 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.247 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.247 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.247 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.247 21:46:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.247 21:46:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:16.247 "name": "raid_bdev1", 00:19:16.247 "uuid": "a76f2aab-ed40-436b-bad2-fc2f363c9f67", 00:19:16.247 "strip_size_kb": 0, 00:19:16.247 "state": "online", 00:19:16.247 "raid_level": "raid1", 00:19:16.247 "superblock": true, 00:19:16.247 "num_base_bdevs": 2, 00:19:16.247 "num_base_bdevs_discovered": 1, 00:19:16.247 "num_base_bdevs_operational": 1, 00:19:16.247 "base_bdevs_list": [ 00:19:16.247 { 00:19:16.247 "name": null, 00:19:16.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.247 "is_configured": false, 00:19:16.247 "data_offset": 0, 00:19:16.247 "data_size": 7936 00:19:16.247 }, 00:19:16.247 { 00:19:16.247 "name": "BaseBdev2", 00:19:16.247 "uuid": "06aef473-37f6-5ce2-8989-34c3ffdf9a84", 00:19:16.247 "is_configured": true, 00:19:16.247 "data_offset": 256, 00:19:16.247 "data_size": 7936 00:19:16.247 } 00:19:16.247 ] 00:19:16.247 }' 00:19:16.247 21:46:17 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:16.533 21:46:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:16.533 21:46:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:16.533 21:46:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:16.533 21:46:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:16.533 21:46:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.533 21:46:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.533 [2024-12-10 21:46:17.094438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:16.533 [2024-12-10 21:46:17.110165] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:19:16.533 21:46:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.533 21:46:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:16.533 [2024-12-10 21:46:17.112046] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:17.484 21:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:17.484 21:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:17.484 21:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:17.484 21:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:17.484 21:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:17.484 21:46:18 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.484 21:46:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.484 21:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.484 21:46:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:17.484 21:46:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.484 21:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:17.484 "name": "raid_bdev1", 00:19:17.484 "uuid": "a76f2aab-ed40-436b-bad2-fc2f363c9f67", 00:19:17.484 "strip_size_kb": 0, 00:19:17.484 "state": "online", 00:19:17.484 "raid_level": "raid1", 00:19:17.484 "superblock": true, 00:19:17.484 "num_base_bdevs": 2, 00:19:17.484 "num_base_bdevs_discovered": 2, 00:19:17.484 "num_base_bdevs_operational": 2, 00:19:17.484 "process": { 00:19:17.484 "type": "rebuild", 00:19:17.484 "target": "spare", 00:19:17.484 "progress": { 00:19:17.484 "blocks": 2560, 00:19:17.484 "percent": 32 00:19:17.484 } 00:19:17.484 }, 00:19:17.484 "base_bdevs_list": [ 00:19:17.484 { 00:19:17.484 "name": "spare", 00:19:17.484 "uuid": "fb34d5ec-9740-5969-9387-0b9a07468f6e", 00:19:17.484 "is_configured": true, 00:19:17.484 "data_offset": 256, 00:19:17.484 "data_size": 7936 00:19:17.484 }, 00:19:17.484 { 00:19:17.484 "name": "BaseBdev2", 00:19:17.484 "uuid": "06aef473-37f6-5ce2-8989-34c3ffdf9a84", 00:19:17.484 "is_configured": true, 00:19:17.484 "data_offset": 256, 00:19:17.484 "data_size": 7936 00:19:17.484 } 00:19:17.484 ] 00:19:17.484 }' 00:19:17.484 21:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:17.484 21:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:17.484 21:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:19:17.484 21:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:17.484 21:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:17.484 21:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:17.485 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:17.485 21:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:17.485 21:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:17.485 21:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:17.485 21:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=690 00:19:17.485 21:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:17.485 21:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:17.485 21:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:17.485 21:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:17.485 21:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:17.485 21:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:17.485 21:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.485 21:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.485 21:46:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.485 21:46:18 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:17.743 21:46:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.743 21:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:17.743 "name": "raid_bdev1", 00:19:17.743 "uuid": "a76f2aab-ed40-436b-bad2-fc2f363c9f67", 00:19:17.743 "strip_size_kb": 0, 00:19:17.743 "state": "online", 00:19:17.743 "raid_level": "raid1", 00:19:17.743 "superblock": true, 00:19:17.743 "num_base_bdevs": 2, 00:19:17.743 "num_base_bdevs_discovered": 2, 00:19:17.743 "num_base_bdevs_operational": 2, 00:19:17.743 "process": { 00:19:17.743 "type": "rebuild", 00:19:17.743 "target": "spare", 00:19:17.743 "progress": { 00:19:17.743 "blocks": 2816, 00:19:17.743 "percent": 35 00:19:17.743 } 00:19:17.743 }, 00:19:17.743 "base_bdevs_list": [ 00:19:17.743 { 00:19:17.743 "name": "spare", 00:19:17.743 "uuid": "fb34d5ec-9740-5969-9387-0b9a07468f6e", 00:19:17.743 "is_configured": true, 00:19:17.743 "data_offset": 256, 00:19:17.743 "data_size": 7936 00:19:17.743 }, 00:19:17.743 { 00:19:17.743 "name": "BaseBdev2", 00:19:17.743 "uuid": "06aef473-37f6-5ce2-8989-34c3ffdf9a84", 00:19:17.743 "is_configured": true, 00:19:17.743 "data_offset": 256, 00:19:17.743 "data_size": 7936 00:19:17.743 } 00:19:17.743 ] 00:19:17.743 }' 00:19:17.743 21:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:17.743 21:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:17.743 21:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:17.744 21:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:17.744 21:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:18.681 21:46:19 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:18.681 21:46:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:18.681 21:46:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:18.681 21:46:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:18.681 21:46:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:18.681 21:46:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:18.681 21:46:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.681 21:46:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.681 21:46:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.681 21:46:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.681 21:46:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.681 21:46:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:18.681 "name": "raid_bdev1", 00:19:18.681 "uuid": "a76f2aab-ed40-436b-bad2-fc2f363c9f67", 00:19:18.681 "strip_size_kb": 0, 00:19:18.681 "state": "online", 00:19:18.681 "raid_level": "raid1", 00:19:18.681 "superblock": true, 00:19:18.681 "num_base_bdevs": 2, 00:19:18.681 "num_base_bdevs_discovered": 2, 00:19:18.681 "num_base_bdevs_operational": 2, 00:19:18.681 "process": { 00:19:18.681 "type": "rebuild", 00:19:18.681 "target": "spare", 00:19:18.681 "progress": { 00:19:18.681 "blocks": 5632, 00:19:18.681 "percent": 70 00:19:18.681 } 00:19:18.681 }, 00:19:18.681 "base_bdevs_list": [ 00:19:18.681 { 00:19:18.681 "name": "spare", 00:19:18.681 "uuid": "fb34d5ec-9740-5969-9387-0b9a07468f6e", 
00:19:18.681 "is_configured": true, 00:19:18.681 "data_offset": 256, 00:19:18.681 "data_size": 7936 00:19:18.681 }, 00:19:18.681 { 00:19:18.681 "name": "BaseBdev2", 00:19:18.681 "uuid": "06aef473-37f6-5ce2-8989-34c3ffdf9a84", 00:19:18.681 "is_configured": true, 00:19:18.681 "data_offset": 256, 00:19:18.681 "data_size": 7936 00:19:18.681 } 00:19:18.681 ] 00:19:18.681 }' 00:19:18.681 21:46:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:18.940 21:46:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:18.940 21:46:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:18.940 21:46:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:18.940 21:46:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:19.509 [2024-12-10 21:46:20.224722] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:19.509 [2024-12-10 21:46:20.224793] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:19.509 [2024-12-10 21:46:20.224910] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:19.769 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:19.769 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:19.769 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:19.769 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:19.769 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:19.769 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:19:19.769 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.769 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.769 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.769 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.029 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.029 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:20.029 "name": "raid_bdev1", 00:19:20.029 "uuid": "a76f2aab-ed40-436b-bad2-fc2f363c9f67", 00:19:20.029 "strip_size_kb": 0, 00:19:20.029 "state": "online", 00:19:20.029 "raid_level": "raid1", 00:19:20.029 "superblock": true, 00:19:20.029 "num_base_bdevs": 2, 00:19:20.029 "num_base_bdevs_discovered": 2, 00:19:20.029 "num_base_bdevs_operational": 2, 00:19:20.029 "base_bdevs_list": [ 00:19:20.029 { 00:19:20.029 "name": "spare", 00:19:20.029 "uuid": "fb34d5ec-9740-5969-9387-0b9a07468f6e", 00:19:20.029 "is_configured": true, 00:19:20.029 "data_offset": 256, 00:19:20.029 "data_size": 7936 00:19:20.029 }, 00:19:20.029 { 00:19:20.029 "name": "BaseBdev2", 00:19:20.029 "uuid": "06aef473-37f6-5ce2-8989-34c3ffdf9a84", 00:19:20.029 "is_configured": true, 00:19:20.029 "data_offset": 256, 00:19:20.029 "data_size": 7936 00:19:20.029 } 00:19:20.029 ] 00:19:20.029 }' 00:19:20.029 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:20.029 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:20.029 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:20.029 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == 
\s\p\a\r\e ]] 00:19:20.029 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:19:20.029 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:20.029 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:20.029 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:20.030 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:20.030 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:20.030 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.030 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.030 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.030 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.030 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.030 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:20.030 "name": "raid_bdev1", 00:19:20.030 "uuid": "a76f2aab-ed40-436b-bad2-fc2f363c9f67", 00:19:20.030 "strip_size_kb": 0, 00:19:20.030 "state": "online", 00:19:20.030 "raid_level": "raid1", 00:19:20.030 "superblock": true, 00:19:20.030 "num_base_bdevs": 2, 00:19:20.030 "num_base_bdevs_discovered": 2, 00:19:20.030 "num_base_bdevs_operational": 2, 00:19:20.030 "base_bdevs_list": [ 00:19:20.030 { 00:19:20.030 "name": "spare", 00:19:20.030 "uuid": "fb34d5ec-9740-5969-9387-0b9a07468f6e", 00:19:20.030 "is_configured": true, 00:19:20.030 "data_offset": 256, 00:19:20.030 "data_size": 7936 00:19:20.030 }, 00:19:20.030 { 00:19:20.030 "name": 
"BaseBdev2", 00:19:20.030 "uuid": "06aef473-37f6-5ce2-8989-34c3ffdf9a84", 00:19:20.030 "is_configured": true, 00:19:20.030 "data_offset": 256, 00:19:20.030 "data_size": 7936 00:19:20.030 } 00:19:20.030 ] 00:19:20.030 }' 00:19:20.030 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:20.030 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:20.030 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:20.289 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:20.289 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:20.289 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:20.289 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:20.289 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:20.289 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:20.289 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:20.289 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.289 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.289 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.289 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.289 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.289 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.289 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.290 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.290 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.290 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.290 "name": "raid_bdev1", 00:19:20.290 "uuid": "a76f2aab-ed40-436b-bad2-fc2f363c9f67", 00:19:20.290 "strip_size_kb": 0, 00:19:20.290 "state": "online", 00:19:20.290 "raid_level": "raid1", 00:19:20.290 "superblock": true, 00:19:20.290 "num_base_bdevs": 2, 00:19:20.290 "num_base_bdevs_discovered": 2, 00:19:20.290 "num_base_bdevs_operational": 2, 00:19:20.290 "base_bdevs_list": [ 00:19:20.290 { 00:19:20.290 "name": "spare", 00:19:20.290 "uuid": "fb34d5ec-9740-5969-9387-0b9a07468f6e", 00:19:20.290 "is_configured": true, 00:19:20.290 "data_offset": 256, 00:19:20.290 "data_size": 7936 00:19:20.290 }, 00:19:20.290 { 00:19:20.290 "name": "BaseBdev2", 00:19:20.290 "uuid": "06aef473-37f6-5ce2-8989-34c3ffdf9a84", 00:19:20.290 "is_configured": true, 00:19:20.290 "data_offset": 256, 00:19:20.290 "data_size": 7936 00:19:20.290 } 00:19:20.290 ] 00:19:20.290 }' 00:19:20.290 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.290 21:46:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.549 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:20.549 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.549 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.549 [2024-12-10 21:46:21.269164] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: 
raid_bdev1 00:19:20.549 [2024-12-10 21:46:21.269247] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:20.549 [2024-12-10 21:46:21.269341] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:20.550 [2024-12-10 21:46:21.269434] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:20.550 [2024-12-10 21:46:21.269488] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:20.550 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.550 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:19:20.550 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.550 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.550 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.550 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.550 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:20.550 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:20.550 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:20.550 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:20.550 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:20.550 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:20.550 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 
-- # local bdev_list 00:19:20.550 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:20.550 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:20.550 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:19:20.550 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:20.550 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:20.550 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:20.809 /dev/nbd0 00:19:20.809 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:20.809 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:20.809 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:20.809 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:19:20.809 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:20.809 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:20.809 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:20.809 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:19:20.809 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:20.809 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:20.809 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:19:20.809 1+0 records in 00:19:20.809 1+0 records out 00:19:20.809 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000448994 s, 9.1 MB/s 00:19:20.810 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:20.810 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:19:20.810 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:20.810 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:20.810 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:19:20.810 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:20.810 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:20.810 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:21.069 /dev/nbd1 00:19:21.069 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:21.069 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:21.069 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:21.069 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:19:21.069 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:21.069 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:21.069 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:21.069 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@877 -- # break 00:19:21.069 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:21.069 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:21.070 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:21.070 1+0 records in 00:19:21.070 1+0 records out 00:19:21.070 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271914 s, 15.1 MB/s 00:19:21.070 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:21.070 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:19:21.070 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:21.070 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:21.070 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:19:21.070 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:21.070 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:21.070 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:21.329 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:21.329 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:21.329 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:21.329 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 
00:19:21.329 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:19:21.329 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:21.329 21:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:21.589 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:21.589 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:21.589 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:21.589 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:21.589 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:21.589 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:21.589 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:21.589 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:21.589 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:21.589 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:21.849 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:21.849 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:21.849 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:21.849 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:21.849 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:19:21.849 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:21.849 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:21.849 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:21.849 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:21.849 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:21.849 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.849 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:21.849 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.849 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:21.849 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.849 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:21.849 [2024-12-10 21:46:22.436058] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:21.849 [2024-12-10 21:46:22.436182] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:21.849 [2024-12-10 21:46:22.436212] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:21.849 [2024-12-10 21:46:22.436222] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:21.849 [2024-12-10 21:46:22.438354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:21.849 [2024-12-10 21:46:22.438393] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:21.849 [2024-12-10 21:46:22.438500] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:21.849 [2024-12-10 21:46:22.438560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:21.849 [2024-12-10 21:46:22.438710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:21.849 spare 00:19:21.849 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.849 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:21.849 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.849 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:21.849 [2024-12-10 21:46:22.538618] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:21.849 [2024-12-10 21:46:22.538701] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:21.849 [2024-12-10 21:46:22.538994] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:19:21.849 [2024-12-10 21:46:22.539197] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:21.849 [2024-12-10 21:46:22.539208] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:21.849 [2024-12-10 21:46:22.539384] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:21.849 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.849 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:21.849 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:21.849 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:21.849 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:21.849 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:21.849 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:21.849 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.849 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.849 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.849 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.849 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.849 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.849 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.849 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:21.849 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.849 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:21.849 "name": "raid_bdev1", 00:19:21.849 "uuid": "a76f2aab-ed40-436b-bad2-fc2f363c9f67", 00:19:21.849 "strip_size_kb": 0, 00:19:21.849 "state": "online", 00:19:21.849 "raid_level": "raid1", 00:19:21.849 "superblock": true, 00:19:21.849 "num_base_bdevs": 2, 00:19:21.849 "num_base_bdevs_discovered": 2, 00:19:21.849 "num_base_bdevs_operational": 2, 00:19:21.849 "base_bdevs_list": [ 00:19:21.849 { 00:19:21.849 "name": "spare", 00:19:21.849 "uuid": "fb34d5ec-9740-5969-9387-0b9a07468f6e", 00:19:21.849 
"is_configured": true, 00:19:21.849 "data_offset": 256, 00:19:21.849 "data_size": 7936 00:19:21.849 }, 00:19:21.849 { 00:19:21.849 "name": "BaseBdev2", 00:19:21.849 "uuid": "06aef473-37f6-5ce2-8989-34c3ffdf9a84", 00:19:21.849 "is_configured": true, 00:19:21.849 "data_offset": 256, 00:19:21.849 "data_size": 7936 00:19:21.849 } 00:19:21.849 ] 00:19:21.849 }' 00:19:21.849 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:21.849 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.419 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:22.419 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:22.419 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:22.419 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:22.419 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:22.419 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.419 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.419 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.419 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.419 21:46:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.419 21:46:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:22.419 "name": "raid_bdev1", 00:19:22.419 "uuid": "a76f2aab-ed40-436b-bad2-fc2f363c9f67", 00:19:22.419 "strip_size_kb": 0, 00:19:22.419 "state": "online", 00:19:22.419 "raid_level": "raid1", 
00:19:22.419 "superblock": true, 00:19:22.419 "num_base_bdevs": 2, 00:19:22.419 "num_base_bdevs_discovered": 2, 00:19:22.419 "num_base_bdevs_operational": 2, 00:19:22.419 "base_bdevs_list": [ 00:19:22.419 { 00:19:22.419 "name": "spare", 00:19:22.419 "uuid": "fb34d5ec-9740-5969-9387-0b9a07468f6e", 00:19:22.419 "is_configured": true, 00:19:22.419 "data_offset": 256, 00:19:22.419 "data_size": 7936 00:19:22.419 }, 00:19:22.419 { 00:19:22.419 "name": "BaseBdev2", 00:19:22.419 "uuid": "06aef473-37f6-5ce2-8989-34c3ffdf9a84", 00:19:22.419 "is_configured": true, 00:19:22.419 "data_offset": 256, 00:19:22.419 "data_size": 7936 00:19:22.419 } 00:19:22.419 ] 00:19:22.419 }' 00:19:22.419 21:46:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:22.419 21:46:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:22.419 21:46:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:22.419 21:46:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:22.419 21:46:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:22.419 21:46:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.419 21:46:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.419 21:46:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.419 21:46:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.419 21:46:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:22.419 21:46:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:22.419 21:46:23 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.419 21:46:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.419 [2024-12-10 21:46:23.162886] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:22.419 21:46:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.419 21:46:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:22.419 21:46:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:22.419 21:46:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:22.419 21:46:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:22.419 21:46:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:22.419 21:46:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:22.419 21:46:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.419 21:46:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.419 21:46:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.419 21:46:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.419 21:46:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.419 21:46:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.419 21:46:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.419 21:46:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.419 21:46:23 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.678 21:46:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.678 "name": "raid_bdev1", 00:19:22.678 "uuid": "a76f2aab-ed40-436b-bad2-fc2f363c9f67", 00:19:22.678 "strip_size_kb": 0, 00:19:22.678 "state": "online", 00:19:22.678 "raid_level": "raid1", 00:19:22.678 "superblock": true, 00:19:22.678 "num_base_bdevs": 2, 00:19:22.678 "num_base_bdevs_discovered": 1, 00:19:22.678 "num_base_bdevs_operational": 1, 00:19:22.678 "base_bdevs_list": [ 00:19:22.678 { 00:19:22.678 "name": null, 00:19:22.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.678 "is_configured": false, 00:19:22.678 "data_offset": 0, 00:19:22.678 "data_size": 7936 00:19:22.678 }, 00:19:22.678 { 00:19:22.678 "name": "BaseBdev2", 00:19:22.678 "uuid": "06aef473-37f6-5ce2-8989-34c3ffdf9a84", 00:19:22.678 "is_configured": true, 00:19:22.678 "data_offset": 256, 00:19:22.678 "data_size": 7936 00:19:22.678 } 00:19:22.678 ] 00:19:22.678 }' 00:19:22.678 21:46:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.678 21:46:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.938 21:46:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:22.938 21:46:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.938 21:46:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.938 [2024-12-10 21:46:23.574211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:22.938 [2024-12-10 21:46:23.574496] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:22.938 [2024-12-10 21:46:23.574560] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: 
Re-adding bdev spare to raid bdev raid_bdev1. 00:19:22.938 [2024-12-10 21:46:23.574618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:22.938 [2024-12-10 21:46:23.591194] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:19:22.938 21:46:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.938 21:46:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:22.938 [2024-12-10 21:46:23.593282] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:23.884 21:46:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:23.884 21:46:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:23.884 21:46:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:23.884 21:46:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:23.884 21:46:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:23.884 21:46:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.884 21:46:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.884 21:46:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.884 21:46:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:23.884 21:46:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.884 21:46:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:23.884 "name": "raid_bdev1", 00:19:23.884 "uuid": "a76f2aab-ed40-436b-bad2-fc2f363c9f67", 00:19:23.884 
"strip_size_kb": 0, 00:19:23.884 "state": "online", 00:19:23.884 "raid_level": "raid1", 00:19:23.884 "superblock": true, 00:19:23.884 "num_base_bdevs": 2, 00:19:23.884 "num_base_bdevs_discovered": 2, 00:19:23.884 "num_base_bdevs_operational": 2, 00:19:23.884 "process": { 00:19:23.884 "type": "rebuild", 00:19:23.884 "target": "spare", 00:19:23.884 "progress": { 00:19:23.884 "blocks": 2560, 00:19:23.884 "percent": 32 00:19:23.884 } 00:19:23.884 }, 00:19:23.884 "base_bdevs_list": [ 00:19:23.884 { 00:19:23.884 "name": "spare", 00:19:23.884 "uuid": "fb34d5ec-9740-5969-9387-0b9a07468f6e", 00:19:23.884 "is_configured": true, 00:19:23.884 "data_offset": 256, 00:19:23.884 "data_size": 7936 00:19:23.884 }, 00:19:23.884 { 00:19:23.884 "name": "BaseBdev2", 00:19:23.884 "uuid": "06aef473-37f6-5ce2-8989-34c3ffdf9a84", 00:19:23.884 "is_configured": true, 00:19:23.884 "data_offset": 256, 00:19:23.884 "data_size": 7936 00:19:23.884 } 00:19:23.884 ] 00:19:23.884 }' 00:19:23.884 21:46:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:24.143 21:46:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:24.143 21:46:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:24.143 21:46:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:24.143 21:46:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:24.143 21:46:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.143 21:46:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:24.143 [2024-12-10 21:46:24.752841] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:24.143 [2024-12-10 21:46:24.798292] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev 
raid_bdev1: No such device 00:19:24.143 [2024-12-10 21:46:24.798371] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:24.143 [2024-12-10 21:46:24.798385] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:24.143 [2024-12-10 21:46:24.798394] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:24.143 21:46:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.143 21:46:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:24.143 21:46:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:24.143 21:46:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:24.143 21:46:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:24.143 21:46:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:24.143 21:46:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:24.143 21:46:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:24.143 21:46:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:24.143 21:46:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:24.143 21:46:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:24.143 21:46:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.143 21:46:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.143 21:46:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:24.143 21:46:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:24.143 21:46:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.143 21:46:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.143 "name": "raid_bdev1", 00:19:24.143 "uuid": "a76f2aab-ed40-436b-bad2-fc2f363c9f67", 00:19:24.143 "strip_size_kb": 0, 00:19:24.143 "state": "online", 00:19:24.143 "raid_level": "raid1", 00:19:24.143 "superblock": true, 00:19:24.143 "num_base_bdevs": 2, 00:19:24.143 "num_base_bdevs_discovered": 1, 00:19:24.143 "num_base_bdevs_operational": 1, 00:19:24.143 "base_bdevs_list": [ 00:19:24.143 { 00:19:24.143 "name": null, 00:19:24.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.143 "is_configured": false, 00:19:24.143 "data_offset": 0, 00:19:24.143 "data_size": 7936 00:19:24.143 }, 00:19:24.143 { 00:19:24.143 "name": "BaseBdev2", 00:19:24.143 "uuid": "06aef473-37f6-5ce2-8989-34c3ffdf9a84", 00:19:24.143 "is_configured": true, 00:19:24.143 "data_offset": 256, 00:19:24.143 "data_size": 7936 00:19:24.143 } 00:19:24.143 ] 00:19:24.143 }' 00:19:24.143 21:46:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.143 21:46:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:24.709 21:46:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:24.709 21:46:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.709 21:46:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:24.709 [2024-12-10 21:46:25.269129] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:24.709 [2024-12-10 21:46:25.269254] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:24.709 [2024-12-10 
21:46:25.269296] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:24.709 [2024-12-10 21:46:25.269330] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:24.709 [2024-12-10 21:46:25.269859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:24.709 [2024-12-10 21:46:25.269925] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:24.709 [2024-12-10 21:46:25.270051] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:24.709 [2024-12-10 21:46:25.270094] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:24.709 [2024-12-10 21:46:25.270139] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:24.709 [2024-12-10 21:46:25.270222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:24.709 [2024-12-10 21:46:25.286311] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:19:24.709 spare 00:19:24.709 21:46:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.709 [2024-12-10 21:46:25.288103] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:24.709 21:46:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:25.646 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:25.646 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:25.646 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:25.646 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:19:25.646 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:25.646 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.646 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.646 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.646 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.646 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.646 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:25.646 "name": "raid_bdev1", 00:19:25.646 "uuid": "a76f2aab-ed40-436b-bad2-fc2f363c9f67", 00:19:25.646 "strip_size_kb": 0, 00:19:25.646 "state": "online", 00:19:25.646 "raid_level": "raid1", 00:19:25.646 "superblock": true, 00:19:25.646 "num_base_bdevs": 2, 00:19:25.646 "num_base_bdevs_discovered": 2, 00:19:25.646 "num_base_bdevs_operational": 2, 00:19:25.646 "process": { 00:19:25.646 "type": "rebuild", 00:19:25.646 "target": "spare", 00:19:25.646 "progress": { 00:19:25.646 "blocks": 2560, 00:19:25.646 "percent": 32 00:19:25.646 } 00:19:25.646 }, 00:19:25.646 "base_bdevs_list": [ 00:19:25.646 { 00:19:25.646 "name": "spare", 00:19:25.646 "uuid": "fb34d5ec-9740-5969-9387-0b9a07468f6e", 00:19:25.646 "is_configured": true, 00:19:25.646 "data_offset": 256, 00:19:25.646 "data_size": 7936 00:19:25.646 }, 00:19:25.646 { 00:19:25.646 "name": "BaseBdev2", 00:19:25.646 "uuid": "06aef473-37f6-5ce2-8989-34c3ffdf9a84", 00:19:25.646 "is_configured": true, 00:19:25.646 "data_offset": 256, 00:19:25.646 "data_size": 7936 00:19:25.646 } 00:19:25.646 ] 00:19:25.646 }' 00:19:25.646 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:25.646 21:46:26 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:25.646 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:25.906 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:25.906 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:25.906 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.906 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.906 [2024-12-10 21:46:26.444228] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:25.906 [2024-12-10 21:46:26.493029] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:25.906 [2024-12-10 21:46:26.493151] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:25.906 [2024-12-10 21:46:26.493173] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:25.906 [2024-12-10 21:46:26.493181] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:25.906 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.906 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:25.906 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:25.906 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:25.906 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:25.906 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:25.906 21:46:26 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:25.906 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:25.906 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:25.906 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:25.906 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:25.906 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.906 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.906 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.906 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.906 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.906 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.906 "name": "raid_bdev1", 00:19:25.906 "uuid": "a76f2aab-ed40-436b-bad2-fc2f363c9f67", 00:19:25.906 "strip_size_kb": 0, 00:19:25.906 "state": "online", 00:19:25.906 "raid_level": "raid1", 00:19:25.906 "superblock": true, 00:19:25.906 "num_base_bdevs": 2, 00:19:25.906 "num_base_bdevs_discovered": 1, 00:19:25.906 "num_base_bdevs_operational": 1, 00:19:25.906 "base_bdevs_list": [ 00:19:25.906 { 00:19:25.906 "name": null, 00:19:25.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.906 "is_configured": false, 00:19:25.906 "data_offset": 0, 00:19:25.906 "data_size": 7936 00:19:25.906 }, 00:19:25.906 { 00:19:25.906 "name": "BaseBdev2", 00:19:25.906 "uuid": "06aef473-37f6-5ce2-8989-34c3ffdf9a84", 00:19:25.906 "is_configured": true, 00:19:25.906 "data_offset": 256, 00:19:25.906 
"data_size": 7936 00:19:25.906 } 00:19:25.906 ] 00:19:25.906 }' 00:19:25.906 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:25.906 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.476 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:26.476 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:26.476 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:26.476 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:26.476 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:26.476 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.476 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.476 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.476 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.476 21:46:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.476 21:46:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:26.476 "name": "raid_bdev1", 00:19:26.476 "uuid": "a76f2aab-ed40-436b-bad2-fc2f363c9f67", 00:19:26.476 "strip_size_kb": 0, 00:19:26.476 "state": "online", 00:19:26.476 "raid_level": "raid1", 00:19:26.476 "superblock": true, 00:19:26.476 "num_base_bdevs": 2, 00:19:26.476 "num_base_bdevs_discovered": 1, 00:19:26.476 "num_base_bdevs_operational": 1, 00:19:26.476 "base_bdevs_list": [ 00:19:26.476 { 00:19:26.476 "name": null, 00:19:26.476 "uuid": "00000000-0000-0000-0000-000000000000", 
00:19:26.476 "is_configured": false, 00:19:26.476 "data_offset": 0, 00:19:26.476 "data_size": 7936 00:19:26.476 }, 00:19:26.476 { 00:19:26.476 "name": "BaseBdev2", 00:19:26.476 "uuid": "06aef473-37f6-5ce2-8989-34c3ffdf9a84", 00:19:26.476 "is_configured": true, 00:19:26.476 "data_offset": 256, 00:19:26.476 "data_size": 7936 00:19:26.476 } 00:19:26.476 ] 00:19:26.476 }' 00:19:26.476 21:46:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:26.476 21:46:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:26.476 21:46:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:26.476 21:46:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:26.476 21:46:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:26.476 21:46:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.476 21:46:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.476 21:46:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.476 21:46:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:26.476 21:46:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.476 21:46:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.476 [2024-12-10 21:46:27.128279] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:26.476 [2024-12-10 21:46:27.128338] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:26.476 [2024-12-10 21:46:27.128361] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x61600000b180 00:19:26.476 [2024-12-10 21:46:27.128378] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:26.476 [2024-12-10 21:46:27.128825] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:26.476 [2024-12-10 21:46:27.128849] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:26.476 [2024-12-10 21:46:27.128943] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:26.476 [2024-12-10 21:46:27.128956] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:26.476 [2024-12-10 21:46:27.128967] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:26.476 [2024-12-10 21:46:27.128977] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:26.476 BaseBdev1 00:19:26.476 21:46:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.476 21:46:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:27.419 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:27.419 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:27.419 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:27.419 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:27.419 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:27.419 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:27.419 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:19:27.419 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:27.419 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:27.419 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:27.419 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.419 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.419 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.419 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:27.419 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.419 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:27.419 "name": "raid_bdev1", 00:19:27.419 "uuid": "a76f2aab-ed40-436b-bad2-fc2f363c9f67", 00:19:27.419 "strip_size_kb": 0, 00:19:27.419 "state": "online", 00:19:27.419 "raid_level": "raid1", 00:19:27.419 "superblock": true, 00:19:27.419 "num_base_bdevs": 2, 00:19:27.419 "num_base_bdevs_discovered": 1, 00:19:27.419 "num_base_bdevs_operational": 1, 00:19:27.419 "base_bdevs_list": [ 00:19:27.419 { 00:19:27.419 "name": null, 00:19:27.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.419 "is_configured": false, 00:19:27.419 "data_offset": 0, 00:19:27.419 "data_size": 7936 00:19:27.419 }, 00:19:27.419 { 00:19:27.419 "name": "BaseBdev2", 00:19:27.419 "uuid": "06aef473-37f6-5ce2-8989-34c3ffdf9a84", 00:19:27.419 "is_configured": true, 00:19:27.419 "data_offset": 256, 00:19:27.419 "data_size": 7936 00:19:27.419 } 00:19:27.419 ] 00:19:27.419 }' 00:19:27.419 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:27.419 21:46:28 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:27.998 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:27.998 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:27.998 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:27.998 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:27.998 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:27.998 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.998 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.998 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.998 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:27.998 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.998 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:27.998 "name": "raid_bdev1", 00:19:27.998 "uuid": "a76f2aab-ed40-436b-bad2-fc2f363c9f67", 00:19:27.999 "strip_size_kb": 0, 00:19:27.999 "state": "online", 00:19:27.999 "raid_level": "raid1", 00:19:27.999 "superblock": true, 00:19:27.999 "num_base_bdevs": 2, 00:19:27.999 "num_base_bdevs_discovered": 1, 00:19:27.999 "num_base_bdevs_operational": 1, 00:19:27.999 "base_bdevs_list": [ 00:19:27.999 { 00:19:27.999 "name": null, 00:19:27.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.999 "is_configured": false, 00:19:27.999 "data_offset": 0, 00:19:27.999 "data_size": 7936 00:19:27.999 }, 00:19:27.999 { 00:19:27.999 "name": "BaseBdev2", 00:19:27.999 "uuid": 
"06aef473-37f6-5ce2-8989-34c3ffdf9a84", 00:19:27.999 "is_configured": true, 00:19:27.999 "data_offset": 256, 00:19:27.999 "data_size": 7936 00:19:27.999 } 00:19:27.999 ] 00:19:27.999 }' 00:19:27.999 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:27.999 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:27.999 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:27.999 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:27.999 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:27.999 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:19:27.999 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:27.999 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:27.999 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:27.999 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:27.999 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:27.999 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:27.999 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.999 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:27.999 [2024-12-10 21:46:28.702436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:19:27.999 [2024-12-10 21:46:28.702669] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:27.999 [2024-12-10 21:46:28.702690] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:27.999 request: 00:19:27.999 { 00:19:27.999 "base_bdev": "BaseBdev1", 00:19:27.999 "raid_bdev": "raid_bdev1", 00:19:27.999 "method": "bdev_raid_add_base_bdev", 00:19:27.999 "req_id": 1 00:19:27.999 } 00:19:27.999 Got JSON-RPC error response 00:19:27.999 response: 00:19:27.999 { 00:19:27.999 "code": -22, 00:19:27.999 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:27.999 } 00:19:27.999 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:27.999 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:19:27.999 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:27.999 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:27.999 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:27.999 21:46:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:28.938 21:46:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:28.938 21:46:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:28.938 21:46:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:28.938 21:46:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:28.938 21:46:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:28.938 21:46:29 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:28.938 21:46:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:28.938 21:46:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.198 21:46:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.198 21:46:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.198 21:46:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.198 21:46:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.198 21:46:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.198 21:46:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:29.198 21:46:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.198 21:46:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.198 "name": "raid_bdev1", 00:19:29.198 "uuid": "a76f2aab-ed40-436b-bad2-fc2f363c9f67", 00:19:29.198 "strip_size_kb": 0, 00:19:29.198 "state": "online", 00:19:29.198 "raid_level": "raid1", 00:19:29.198 "superblock": true, 00:19:29.198 "num_base_bdevs": 2, 00:19:29.198 "num_base_bdevs_discovered": 1, 00:19:29.198 "num_base_bdevs_operational": 1, 00:19:29.198 "base_bdevs_list": [ 00:19:29.198 { 00:19:29.198 "name": null, 00:19:29.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.198 "is_configured": false, 00:19:29.198 "data_offset": 0, 00:19:29.198 "data_size": 7936 00:19:29.198 }, 00:19:29.198 { 00:19:29.198 "name": "BaseBdev2", 00:19:29.198 "uuid": "06aef473-37f6-5ce2-8989-34c3ffdf9a84", 00:19:29.198 "is_configured": true, 00:19:29.198 "data_offset": 256, 00:19:29.198 "data_size": 7936 00:19:29.198 } 
00:19:29.198 ] 00:19:29.198 }' 00:19:29.198 21:46:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.198 21:46:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:29.458 21:46:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:29.458 21:46:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:29.458 21:46:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:29.458 21:46:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:29.458 21:46:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:29.458 21:46:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.458 21:46:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.458 21:46:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:29.458 21:46:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.458 21:46:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.458 21:46:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:29.458 "name": "raid_bdev1", 00:19:29.458 "uuid": "a76f2aab-ed40-436b-bad2-fc2f363c9f67", 00:19:29.458 "strip_size_kb": 0, 00:19:29.458 "state": "online", 00:19:29.458 "raid_level": "raid1", 00:19:29.458 "superblock": true, 00:19:29.458 "num_base_bdevs": 2, 00:19:29.458 "num_base_bdevs_discovered": 1, 00:19:29.458 "num_base_bdevs_operational": 1, 00:19:29.458 "base_bdevs_list": [ 00:19:29.458 { 00:19:29.458 "name": null, 00:19:29.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.458 "is_configured": false, 
00:19:29.458 "data_offset": 0, 00:19:29.458 "data_size": 7936 00:19:29.458 }, 00:19:29.458 { 00:19:29.458 "name": "BaseBdev2", 00:19:29.458 "uuid": "06aef473-37f6-5ce2-8989-34c3ffdf9a84", 00:19:29.458 "is_configured": true, 00:19:29.458 "data_offset": 256, 00:19:29.458 "data_size": 7936 00:19:29.458 } 00:19:29.458 ] 00:19:29.458 }' 00:19:29.458 21:46:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:29.717 21:46:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:29.717 21:46:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:29.717 21:46:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:29.717 21:46:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86668 00:19:29.717 21:46:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86668 ']' 00:19:29.717 21:46:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86668 00:19:29.717 21:46:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:19:29.717 21:46:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:29.717 21:46:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86668 00:19:29.717 21:46:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:29.717 21:46:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:29.717 killing process with pid 86668 00:19:29.717 Received shutdown signal, test time was about 60.000000 seconds 00:19:29.717 00:19:29.717 Latency(us) 00:19:29.717 [2024-12-10T21:46:30.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.717 [2024-12-10T21:46:30.500Z] 
=================================================================================================================== 00:19:29.717 [2024-12-10T21:46:30.500Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:29.717 21:46:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86668' 00:19:29.717 21:46:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86668 00:19:29.717 [2024-12-10 21:46:30.342572] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:29.717 [2024-12-10 21:46:30.342689] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:29.717 [2024-12-10 21:46:30.342736] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:29.717 [2024-12-10 21:46:30.342746] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:29.717 21:46:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86668 00:19:29.976 [2024-12-10 21:46:30.629405] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:31.357 21:46:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:19:31.357 00:19:31.357 real 0m19.546s 00:19:31.357 user 0m25.464s 00:19:31.357 sys 0m2.446s 00:19:31.357 21:46:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:31.357 21:46:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:31.357 ************************************ 00:19:31.357 END TEST raid_rebuild_test_sb_4k 00:19:31.357 ************************************ 00:19:31.357 21:46:31 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:19:31.357 21:46:31 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:19:31.357 21:46:31 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:31.357 21:46:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:31.357 21:46:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:31.357 ************************************ 00:19:31.357 START TEST raid_state_function_test_sb_md_separate 00:19:31.357 ************************************ 00:19:31.357 21:46:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:19:31.357 21:46:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:31.357 21:46:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:31.357 21:46:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:31.357 21:46:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:31.357 21:46:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:31.357 21:46:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:31.357 21:46:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:31.357 21:46:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:31.357 21:46:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:31.357 21:46:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:31.357 21:46:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:31.357 21:46:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:19:31.357 21:46:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:31.357 21:46:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:31.357 21:46:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:31.357 21:46:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:31.357 21:46:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:31.357 21:46:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:31.357 21:46:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:31.357 21:46:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:31.357 21:46:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:31.357 21:46:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:31.357 21:46:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87354 00:19:31.357 21:46:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:31.357 21:46:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87354' 00:19:31.357 Process raid pid: 87354 00:19:31.357 21:46:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87354 00:19:31.357 21:46:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87354 ']' 00:19:31.357 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.357 21:46:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.357 21:46:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:31.357 21:46:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.357 21:46:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:31.357 21:46:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:31.357 [2024-12-10 21:46:31.875622] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:19:31.357 [2024-12-10 21:46:31.875736] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:31.357 [2024-12-10 21:46:32.042433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.617 [2024-12-10 21:46:32.152397] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.617 [2024-12-10 21:46:32.352679] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:31.617 [2024-12-10 21:46:32.352767] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:32.187 21:46:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:32.187 21:46:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:19:32.187 21:46:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # 
rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:32.187 21:46:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.187 21:46:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.187 [2024-12-10 21:46:32.691300] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:32.187 [2024-12-10 21:46:32.691411] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:32.187 [2024-12-10 21:46:32.691435] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:32.187 [2024-12-10 21:46:32.691445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:32.187 21:46:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.187 21:46:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:32.187 21:46:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:32.187 21:46:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:32.187 21:46:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:32.187 21:46:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:32.187 21:46:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:32.187 21:46:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.187 21:46:32 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.187 21:46:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:32.187 21:46:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.187 21:46:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.187 21:46:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:32.187 21:46:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.187 21:46:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.188 21:46:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.188 21:46:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.188 "name": "Existed_Raid", 00:19:32.188 "uuid": "435b2351-9319-44ba-b425-46d3814eee5b", 00:19:32.188 "strip_size_kb": 0, 00:19:32.188 "state": "configuring", 00:19:32.188 "raid_level": "raid1", 00:19:32.188 "superblock": true, 00:19:32.188 "num_base_bdevs": 2, 00:19:32.188 "num_base_bdevs_discovered": 0, 00:19:32.188 "num_base_bdevs_operational": 2, 00:19:32.188 "base_bdevs_list": [ 00:19:32.188 { 00:19:32.188 "name": "BaseBdev1", 00:19:32.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.188 "is_configured": false, 00:19:32.188 "data_offset": 0, 00:19:32.188 "data_size": 0 00:19:32.188 }, 00:19:32.188 { 00:19:32.188 "name": "BaseBdev2", 00:19:32.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.188 "is_configured": false, 00:19:32.188 "data_offset": 0, 00:19:32.188 "data_size": 0 00:19:32.188 } 00:19:32.188 ] 00:19:32.188 }' 00:19:32.188 21:46:32 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.188 21:46:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.446 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:32.446 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.446 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.446 [2024-12-10 21:46:33.138551] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:32.446 [2024-12-10 21:46:33.138634] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:32.446 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.446 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:32.446 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.446 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.446 [2024-12-10 21:46:33.150525] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:32.446 [2024-12-10 21:46:33.150616] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:32.446 [2024-12-10 21:46:33.150645] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:32.446 [2024-12-10 21:46:33.150670] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:32.446 21:46:33 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.446 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:19:32.446 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.446 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.446 [2024-12-10 21:46:33.197599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:32.446 BaseBdev1 00:19:32.446 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.446 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:32.446 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:32.446 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:32.446 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:19:32.446 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:32.446 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:32.446 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:32.446 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.446 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.446 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.446 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:32.446 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.446 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.706 [ 00:19:32.706 { 00:19:32.706 "name": "BaseBdev1", 00:19:32.706 "aliases": [ 00:19:32.706 "67bdbd5d-ae92-4dbf-805a-eac378dff2da" 00:19:32.706 ], 00:19:32.706 "product_name": "Malloc disk", 00:19:32.706 "block_size": 4096, 00:19:32.706 "num_blocks": 8192, 00:19:32.706 "uuid": "67bdbd5d-ae92-4dbf-805a-eac378dff2da", 00:19:32.706 "md_size": 32, 00:19:32.706 "md_interleave": false, 00:19:32.706 "dif_type": 0, 00:19:32.706 "assigned_rate_limits": { 00:19:32.706 "rw_ios_per_sec": 0, 00:19:32.706 "rw_mbytes_per_sec": 0, 00:19:32.706 "r_mbytes_per_sec": 0, 00:19:32.706 "w_mbytes_per_sec": 0 00:19:32.706 }, 00:19:32.706 "claimed": true, 00:19:32.706 "claim_type": "exclusive_write", 00:19:32.706 "zoned": false, 00:19:32.706 "supported_io_types": { 00:19:32.706 "read": true, 00:19:32.706 "write": true, 00:19:32.706 "unmap": true, 00:19:32.706 "flush": true, 00:19:32.706 "reset": true, 00:19:32.706 "nvme_admin": false, 00:19:32.706 "nvme_io": false, 00:19:32.706 "nvme_io_md": false, 00:19:32.706 "write_zeroes": true, 00:19:32.706 "zcopy": true, 00:19:32.706 "get_zone_info": false, 00:19:32.706 "zone_management": false, 00:19:32.706 "zone_append": false, 00:19:32.706 "compare": false, 00:19:32.706 "compare_and_write": false, 00:19:32.706 "abort": true, 00:19:32.706 "seek_hole": false, 00:19:32.706 "seek_data": false, 00:19:32.706 "copy": true, 00:19:32.706 "nvme_iov_md": false 00:19:32.706 }, 00:19:32.706 "memory_domains": [ 00:19:32.706 { 00:19:32.706 "dma_device_id": "system", 00:19:32.706 "dma_device_type": 1 00:19:32.706 }, 
00:19:32.706 { 00:19:32.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:32.706 "dma_device_type": 2 00:19:32.706 } 00:19:32.706 ], 00:19:32.706 "driver_specific": {} 00:19:32.706 } 00:19:32.706 ] 00:19:32.706 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.706 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:19:32.706 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:32.706 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:32.706 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:32.706 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:32.706 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:32.706 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:32.706 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.706 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.706 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:32.706 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.706 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.706 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:19:32.706 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.706 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.706 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.706 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.706 "name": "Existed_Raid", 00:19:32.706 "uuid": "d587a5c1-9de1-4e7e-a8b6-cc060dd443fc", 00:19:32.706 "strip_size_kb": 0, 00:19:32.706 "state": "configuring", 00:19:32.706 "raid_level": "raid1", 00:19:32.706 "superblock": true, 00:19:32.706 "num_base_bdevs": 2, 00:19:32.706 "num_base_bdevs_discovered": 1, 00:19:32.706 "num_base_bdevs_operational": 2, 00:19:32.706 "base_bdevs_list": [ 00:19:32.706 { 00:19:32.706 "name": "BaseBdev1", 00:19:32.706 "uuid": "67bdbd5d-ae92-4dbf-805a-eac378dff2da", 00:19:32.706 "is_configured": true, 00:19:32.706 "data_offset": 256, 00:19:32.706 "data_size": 7936 00:19:32.706 }, 00:19:32.706 { 00:19:32.706 "name": "BaseBdev2", 00:19:32.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.706 "is_configured": false, 00:19:32.706 "data_offset": 0, 00:19:32.706 "data_size": 0 00:19:32.706 } 00:19:32.706 ] 00:19:32.706 }' 00:19:32.706 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.706 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.966 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:32.966 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.966 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:19:32.966 [2024-12-10 21:46:33.684836] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:32.966 [2024-12-10 21:46:33.684950] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:32.966 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.966 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:32.966 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.966 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.966 [2024-12-10 21:46:33.696846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:32.966 [2024-12-10 21:46:33.698606] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:32.966 [2024-12-10 21:46:33.698649] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:32.966 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.966 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:32.966 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:32.966 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:32.966 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:32.966 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:19:32.966 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:32.966 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:32.966 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:32.966 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.966 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.966 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:32.966 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.967 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.967 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.967 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.967 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:32.967 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.227 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:33.227 "name": "Existed_Raid", 00:19:33.227 "uuid": "e476cddb-0a87-4c46-b199-0c4433185535", 00:19:33.227 "strip_size_kb": 0, 00:19:33.227 "state": "configuring", 00:19:33.227 "raid_level": "raid1", 00:19:33.227 "superblock": true, 00:19:33.227 "num_base_bdevs": 2, 00:19:33.227 "num_base_bdevs_discovered": 1, 00:19:33.227 
"num_base_bdevs_operational": 2, 00:19:33.227 "base_bdevs_list": [ 00:19:33.227 { 00:19:33.227 "name": "BaseBdev1", 00:19:33.227 "uuid": "67bdbd5d-ae92-4dbf-805a-eac378dff2da", 00:19:33.227 "is_configured": true, 00:19:33.227 "data_offset": 256, 00:19:33.227 "data_size": 7936 00:19:33.227 }, 00:19:33.227 { 00:19:33.227 "name": "BaseBdev2", 00:19:33.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.227 "is_configured": false, 00:19:33.227 "data_offset": 0, 00:19:33.227 "data_size": 0 00:19:33.227 } 00:19:33.227 ] 00:19:33.227 }' 00:19:33.227 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:33.227 21:46:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.486 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:19:33.486 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.486 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.486 [2024-12-10 21:46:34.188133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:33.486 [2024-12-10 21:46:34.188558] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:33.486 [2024-12-10 21:46:34.188616] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:33.486 [2024-12-10 21:46:34.188731] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:33.486 [2024-12-10 21:46:34.188915] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:33.486 [2024-12-10 21:46:34.188961] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:33.486 [2024-12-10 
21:46:34.189091] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:33.486 BaseBdev2 00:19:33.486 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.487 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:33.487 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:33.487 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:33.487 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:19:33.487 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:33.487 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:33.487 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:33.487 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.487 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.487 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.487 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:33.487 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.487 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.487 [ 00:19:33.487 { 00:19:33.487 "name": "BaseBdev2", 00:19:33.487 "aliases": [ 00:19:33.487 
"b28630bf-bf41-4018-8775-b829ac390bb4" 00:19:33.487 ], 00:19:33.487 "product_name": "Malloc disk", 00:19:33.487 "block_size": 4096, 00:19:33.487 "num_blocks": 8192, 00:19:33.487 "uuid": "b28630bf-bf41-4018-8775-b829ac390bb4", 00:19:33.487 "md_size": 32, 00:19:33.487 "md_interleave": false, 00:19:33.487 "dif_type": 0, 00:19:33.487 "assigned_rate_limits": { 00:19:33.487 "rw_ios_per_sec": 0, 00:19:33.487 "rw_mbytes_per_sec": 0, 00:19:33.487 "r_mbytes_per_sec": 0, 00:19:33.487 "w_mbytes_per_sec": 0 00:19:33.487 }, 00:19:33.487 "claimed": true, 00:19:33.487 "claim_type": "exclusive_write", 00:19:33.487 "zoned": false, 00:19:33.487 "supported_io_types": { 00:19:33.487 "read": true, 00:19:33.487 "write": true, 00:19:33.487 "unmap": true, 00:19:33.487 "flush": true, 00:19:33.487 "reset": true, 00:19:33.487 "nvme_admin": false, 00:19:33.487 "nvme_io": false, 00:19:33.487 "nvme_io_md": false, 00:19:33.487 "write_zeroes": true, 00:19:33.487 "zcopy": true, 00:19:33.487 "get_zone_info": false, 00:19:33.487 "zone_management": false, 00:19:33.487 "zone_append": false, 00:19:33.487 "compare": false, 00:19:33.487 "compare_and_write": false, 00:19:33.487 "abort": true, 00:19:33.487 "seek_hole": false, 00:19:33.487 "seek_data": false, 00:19:33.487 "copy": true, 00:19:33.487 "nvme_iov_md": false 00:19:33.487 }, 00:19:33.487 "memory_domains": [ 00:19:33.487 { 00:19:33.487 "dma_device_id": "system", 00:19:33.487 "dma_device_type": 1 00:19:33.487 }, 00:19:33.487 { 00:19:33.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.487 "dma_device_type": 2 00:19:33.487 } 00:19:33.487 ], 00:19:33.487 "driver_specific": {} 00:19:33.487 } 00:19:33.487 ] 00:19:33.487 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.487 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:19:33.487 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( 
i++ )) 00:19:33.487 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:33.487 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:33.487 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:33.487 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:33.487 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:33.487 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:33.487 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:33.487 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:33.487 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:33.487 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:33.487 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:33.487 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.487 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:33.487 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.487 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.487 21:46:34 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.487 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:33.487 "name": "Existed_Raid", 00:19:33.487 "uuid": "e476cddb-0a87-4c46-b199-0c4433185535", 00:19:33.487 "strip_size_kb": 0, 00:19:33.487 "state": "online", 00:19:33.487 "raid_level": "raid1", 00:19:33.487 "superblock": true, 00:19:33.487 "num_base_bdevs": 2, 00:19:33.487 "num_base_bdevs_discovered": 2, 00:19:33.487 "num_base_bdevs_operational": 2, 00:19:33.487 "base_bdevs_list": [ 00:19:33.487 { 00:19:33.487 "name": "BaseBdev1", 00:19:33.487 "uuid": "67bdbd5d-ae92-4dbf-805a-eac378dff2da", 00:19:33.487 "is_configured": true, 00:19:33.487 "data_offset": 256, 00:19:33.487 "data_size": 7936 00:19:33.487 }, 00:19:33.487 { 00:19:33.487 "name": "BaseBdev2", 00:19:33.487 "uuid": "b28630bf-bf41-4018-8775-b829ac390bb4", 00:19:33.487 "is_configured": true, 00:19:33.487 "data_offset": 256, 00:19:33.487 "data_size": 7936 00:19:33.487 } 00:19:33.487 ] 00:19:33.487 }' 00:19:33.487 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:33.487 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.057 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:34.057 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:34.057 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:34.057 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:34.057 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:34.057 21:46:34 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:34.057 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:34.057 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.057 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.057 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:34.057 [2024-12-10 21:46:34.659740] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:34.057 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.057 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:34.057 "name": "Existed_Raid", 00:19:34.057 "aliases": [ 00:19:34.057 "e476cddb-0a87-4c46-b199-0c4433185535" 00:19:34.057 ], 00:19:34.057 "product_name": "Raid Volume", 00:19:34.057 "block_size": 4096, 00:19:34.057 "num_blocks": 7936, 00:19:34.057 "uuid": "e476cddb-0a87-4c46-b199-0c4433185535", 00:19:34.057 "md_size": 32, 00:19:34.057 "md_interleave": false, 00:19:34.057 "dif_type": 0, 00:19:34.057 "assigned_rate_limits": { 00:19:34.057 "rw_ios_per_sec": 0, 00:19:34.057 "rw_mbytes_per_sec": 0, 00:19:34.057 "r_mbytes_per_sec": 0, 00:19:34.057 "w_mbytes_per_sec": 0 00:19:34.057 }, 00:19:34.057 "claimed": false, 00:19:34.057 "zoned": false, 00:19:34.057 "supported_io_types": { 00:19:34.057 "read": true, 00:19:34.057 "write": true, 00:19:34.057 "unmap": false, 00:19:34.057 "flush": false, 00:19:34.057 "reset": true, 00:19:34.057 "nvme_admin": false, 00:19:34.057 "nvme_io": false, 00:19:34.057 "nvme_io_md": false, 00:19:34.057 "write_zeroes": true, 00:19:34.057 "zcopy": false, 00:19:34.057 "get_zone_info": 
false, 00:19:34.057 "zone_management": false, 00:19:34.057 "zone_append": false, 00:19:34.057 "compare": false, 00:19:34.057 "compare_and_write": false, 00:19:34.057 "abort": false, 00:19:34.057 "seek_hole": false, 00:19:34.057 "seek_data": false, 00:19:34.057 "copy": false, 00:19:34.057 "nvme_iov_md": false 00:19:34.057 }, 00:19:34.057 "memory_domains": [ 00:19:34.057 { 00:19:34.057 "dma_device_id": "system", 00:19:34.057 "dma_device_type": 1 00:19:34.057 }, 00:19:34.057 { 00:19:34.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.058 "dma_device_type": 2 00:19:34.058 }, 00:19:34.058 { 00:19:34.058 "dma_device_id": "system", 00:19:34.058 "dma_device_type": 1 00:19:34.058 }, 00:19:34.058 { 00:19:34.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.058 "dma_device_type": 2 00:19:34.058 } 00:19:34.058 ], 00:19:34.058 "driver_specific": { 00:19:34.058 "raid": { 00:19:34.058 "uuid": "e476cddb-0a87-4c46-b199-0c4433185535", 00:19:34.058 "strip_size_kb": 0, 00:19:34.058 "state": "online", 00:19:34.058 "raid_level": "raid1", 00:19:34.058 "superblock": true, 00:19:34.058 "num_base_bdevs": 2, 00:19:34.058 "num_base_bdevs_discovered": 2, 00:19:34.058 "num_base_bdevs_operational": 2, 00:19:34.058 "base_bdevs_list": [ 00:19:34.058 { 00:19:34.058 "name": "BaseBdev1", 00:19:34.058 "uuid": "67bdbd5d-ae92-4dbf-805a-eac378dff2da", 00:19:34.058 "is_configured": true, 00:19:34.058 "data_offset": 256, 00:19:34.058 "data_size": 7936 00:19:34.058 }, 00:19:34.058 { 00:19:34.058 "name": "BaseBdev2", 00:19:34.058 "uuid": "b28630bf-bf41-4018-8775-b829ac390bb4", 00:19:34.058 "is_configured": true, 00:19:34.058 "data_offset": 256, 00:19:34.058 "data_size": 7936 00:19:34.058 } 00:19:34.058 ] 00:19:34.058 } 00:19:34.058 } 00:19:34.058 }' 00:19:34.058 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:34.058 21:46:34 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:34.058 BaseBdev2' 00:19:34.058 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:34.058 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:34.058 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:34.058 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:34.058 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:34.058 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.058 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.058 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.058 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:34.058 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:34.058 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:34.058 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:34.058 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:19:34.058 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.058 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.058 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.318 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:34.318 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:34.318 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:34.318 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.318 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.318 [2024-12-10 21:46:34.867084] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:34.318 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.318 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:34.318 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:34.318 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:34.318 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:19:34.318 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:34.318 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid 
online raid1 0 1 00:19:34.318 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:34.318 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:34.318 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:34.318 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:34.318 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:34.318 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:34.318 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:34.318 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:34.318 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:34.318 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.318 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:34.318 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.318 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.318 21:46:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.318 21:46:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:34.318 "name": "Existed_Raid", 00:19:34.318 "uuid": 
"e476cddb-0a87-4c46-b199-0c4433185535", 00:19:34.318 "strip_size_kb": 0, 00:19:34.318 "state": "online", 00:19:34.318 "raid_level": "raid1", 00:19:34.318 "superblock": true, 00:19:34.318 "num_base_bdevs": 2, 00:19:34.318 "num_base_bdevs_discovered": 1, 00:19:34.318 "num_base_bdevs_operational": 1, 00:19:34.318 "base_bdevs_list": [ 00:19:34.318 { 00:19:34.318 "name": null, 00:19:34.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.318 "is_configured": false, 00:19:34.318 "data_offset": 0, 00:19:34.318 "data_size": 7936 00:19:34.318 }, 00:19:34.318 { 00:19:34.318 "name": "BaseBdev2", 00:19:34.319 "uuid": "b28630bf-bf41-4018-8775-b829ac390bb4", 00:19:34.319 "is_configured": true, 00:19:34.319 "data_offset": 256, 00:19:34.319 "data_size": 7936 00:19:34.319 } 00:19:34.319 ] 00:19:34.319 }' 00:19:34.319 21:46:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:34.319 21:46:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.887 21:46:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:34.887 21:46:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:34.887 21:46:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.887 21:46:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.887 21:46:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.887 21:46:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:34.887 21:46:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.887 21:46:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 
-- # raid_bdev=Existed_Raid 00:19:34.887 21:46:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:34.887 21:46:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:34.887 21:46:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.887 21:46:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.887 [2024-12-10 21:46:35.463374] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:34.887 [2024-12-10 21:46:35.463488] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:34.887 [2024-12-10 21:46:35.567126] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:34.887 [2024-12-10 21:46:35.567257] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:34.887 [2024-12-10 21:46:35.567298] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:34.887 21:46:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.887 21:46:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:34.887 21:46:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:34.887 21:46:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:34.887 21:46:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.887 21:46:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.887 21:46:35 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.887 21:46:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.887 21:46:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:34.887 21:46:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:34.887 21:46:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:34.887 21:46:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87354 00:19:34.887 21:46:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87354 ']' 00:19:34.887 21:46:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87354 00:19:34.887 21:46:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:19:34.887 21:46:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:34.887 21:46:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87354 00:19:34.887 killing process with pid 87354 00:19:34.887 21:46:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:34.888 21:46:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:34.888 21:46:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87354' 00:19:34.888 21:46:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87354 00:19:34.888 [2024-12-10 21:46:35.648084] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:19:34.888 21:46:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87354 00:19:34.888 [2024-12-10 21:46:35.665200] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:36.268 ************************************ 00:19:36.268 END TEST raid_state_function_test_sb_md_separate 00:19:36.268 ************************************ 00:19:36.268 21:46:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:19:36.268 00:19:36.268 real 0m4.992s 00:19:36.268 user 0m7.162s 00:19:36.268 sys 0m0.821s 00:19:36.268 21:46:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:36.268 21:46:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.268 21:46:36 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:19:36.268 21:46:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:36.268 21:46:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:36.268 21:46:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:36.268 ************************************ 00:19:36.268 START TEST raid_superblock_test_md_separate 00:19:36.268 ************************************ 00:19:36.268 21:46:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:19:36.268 21:46:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:36.268 21:46:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:36.268 21:46:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:36.268 21:46:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 
00:19:36.268 21:46:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:36.268 21:46:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:36.268 21:46:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:36.268 21:46:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:36.268 21:46:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:36.268 21:46:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:36.268 21:46:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:36.268 21:46:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:36.268 21:46:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:36.268 21:46:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:36.268 21:46:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:36.268 21:46:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87605 00:19:36.268 21:46:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:36.268 21:46:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87605 00:19:36.268 21:46:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87605 ']' 00:19:36.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:36.268 21:46:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.268 21:46:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:36.268 21:46:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.268 21:46:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:36.268 21:46:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.268 [2024-12-10 21:46:36.931350] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:19:36.268 [2024-12-10 21:46:36.931485] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87605 ] 00:19:36.527 [2024-12-10 21:46:37.104616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.527 [2024-12-10 21:46:37.216059] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.785 [2024-12-10 21:46:37.402046] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:36.785 [2024-12-10 21:46:37.402081] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:37.044 21:46:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:37.044 21:46:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:19:37.044 21:46:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:37.044 21:46:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= 
num_base_bdevs )) 00:19:37.044 21:46:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:37.044 21:46:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:37.044 21:46:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:37.044 21:46:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:37.044 21:46:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:37.044 21:46:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:37.044 21:46:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:19:37.044 21:46:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.044 21:46:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.044 malloc1 00:19:37.044 21:46:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.044 21:46:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:37.044 21:46:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.044 21:46:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.044 [2024-12-10 21:46:37.809894] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:37.044 [2024-12-10 21:46:37.810006] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:37.044 [2024-12-10 21:46:37.810042] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:37.044 [2024-12-10 21:46:37.810070] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:37.044 [2024-12-10 21:46:37.811986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:37.044 [2024-12-10 21:46:37.812081] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:37.044 pt1 00:19:37.044 21:46:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.044 21:46:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:37.044 21:46:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:37.044 21:46:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:37.044 21:46:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:37.044 21:46:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:37.044 21:46:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:37.044 21:46:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:37.044 21:46:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:37.044 21:46:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:19:37.044 21:46:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.044 21:46:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.303 malloc2 00:19:37.303 21:46:37 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.303 21:46:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:37.303 21:46:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.303 21:46:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.303 [2024-12-10 21:46:37.863829] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:37.303 [2024-12-10 21:46:37.863939] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:37.303 [2024-12-10 21:46:37.863975] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:37.303 [2024-12-10 21:46:37.864003] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:37.303 [2024-12-10 21:46:37.865799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:37.303 [2024-12-10 21:46:37.865868] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:37.303 pt2 00:19:37.303 21:46:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.303 21:46:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:37.303 21:46:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:37.303 21:46:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:37.303 21:46:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.303 21:46:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.303 
[2024-12-10 21:46:37.875832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:37.303 [2024-12-10 21:46:37.877630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:37.303 [2024-12-10 21:46:37.877805] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:37.303 [2024-12-10 21:46:37.877820] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:37.303 [2024-12-10 21:46:37.877890] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:37.303 [2024-12-10 21:46:37.878009] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:37.303 [2024-12-10 21:46:37.878019] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:37.303 [2024-12-10 21:46:37.878117] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:37.304 21:46:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.304 21:46:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:37.304 21:46:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:37.304 21:46:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:37.304 21:46:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:37.304 21:46:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:37.304 21:46:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:37.304 21:46:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:19:37.304 21:46:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:37.304 21:46:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:37.304 21:46:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:37.304 21:46:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.304 21:46:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.304 21:46:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.304 21:46:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.304 21:46:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.304 21:46:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:37.304 "name": "raid_bdev1", 00:19:37.304 "uuid": "8042a25e-cd7f-4845-bf9a-e272c3f03324", 00:19:37.304 "strip_size_kb": 0, 00:19:37.304 "state": "online", 00:19:37.304 "raid_level": "raid1", 00:19:37.304 "superblock": true, 00:19:37.304 "num_base_bdevs": 2, 00:19:37.304 "num_base_bdevs_discovered": 2, 00:19:37.304 "num_base_bdevs_operational": 2, 00:19:37.304 "base_bdevs_list": [ 00:19:37.304 { 00:19:37.304 "name": "pt1", 00:19:37.304 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:37.304 "is_configured": true, 00:19:37.304 "data_offset": 256, 00:19:37.304 "data_size": 7936 00:19:37.304 }, 00:19:37.304 { 00:19:37.304 "name": "pt2", 00:19:37.304 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:37.304 "is_configured": true, 00:19:37.304 "data_offset": 256, 00:19:37.304 "data_size": 7936 00:19:37.304 } 00:19:37.304 ] 00:19:37.304 }' 00:19:37.304 21:46:37 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:37.304 21:46:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.872 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:37.872 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:37.872 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:37.872 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:37.872 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:37.872 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:37.872 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:37.872 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:37.872 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.872 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.872 [2024-12-10 21:46:38.375261] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:37.872 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.872 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:37.872 "name": "raid_bdev1", 00:19:37.872 "aliases": [ 00:19:37.872 "8042a25e-cd7f-4845-bf9a-e272c3f03324" 00:19:37.872 ], 00:19:37.872 "product_name": "Raid Volume", 00:19:37.872 "block_size": 4096, 00:19:37.872 "num_blocks": 7936, 00:19:37.872 "uuid": "8042a25e-cd7f-4845-bf9a-e272c3f03324", 
00:19:37.872 "md_size": 32, 00:19:37.872 "md_interleave": false, 00:19:37.872 "dif_type": 0, 00:19:37.872 "assigned_rate_limits": { 00:19:37.872 "rw_ios_per_sec": 0, 00:19:37.872 "rw_mbytes_per_sec": 0, 00:19:37.872 "r_mbytes_per_sec": 0, 00:19:37.872 "w_mbytes_per_sec": 0 00:19:37.872 }, 00:19:37.872 "claimed": false, 00:19:37.872 "zoned": false, 00:19:37.872 "supported_io_types": { 00:19:37.872 "read": true, 00:19:37.872 "write": true, 00:19:37.872 "unmap": false, 00:19:37.872 "flush": false, 00:19:37.872 "reset": true, 00:19:37.872 "nvme_admin": false, 00:19:37.872 "nvme_io": false, 00:19:37.872 "nvme_io_md": false, 00:19:37.872 "write_zeroes": true, 00:19:37.872 "zcopy": false, 00:19:37.872 "get_zone_info": false, 00:19:37.872 "zone_management": false, 00:19:37.872 "zone_append": false, 00:19:37.872 "compare": false, 00:19:37.872 "compare_and_write": false, 00:19:37.872 "abort": false, 00:19:37.872 "seek_hole": false, 00:19:37.872 "seek_data": false, 00:19:37.872 "copy": false, 00:19:37.872 "nvme_iov_md": false 00:19:37.872 }, 00:19:37.872 "memory_domains": [ 00:19:37.872 { 00:19:37.872 "dma_device_id": "system", 00:19:37.872 "dma_device_type": 1 00:19:37.872 }, 00:19:37.872 { 00:19:37.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:37.872 "dma_device_type": 2 00:19:37.872 }, 00:19:37.872 { 00:19:37.872 "dma_device_id": "system", 00:19:37.872 "dma_device_type": 1 00:19:37.872 }, 00:19:37.872 { 00:19:37.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:37.872 "dma_device_type": 2 00:19:37.872 } 00:19:37.872 ], 00:19:37.872 "driver_specific": { 00:19:37.872 "raid": { 00:19:37.872 "uuid": "8042a25e-cd7f-4845-bf9a-e272c3f03324", 00:19:37.872 "strip_size_kb": 0, 00:19:37.872 "state": "online", 00:19:37.873 "raid_level": "raid1", 00:19:37.873 "superblock": true, 00:19:37.873 "num_base_bdevs": 2, 00:19:37.873 "num_base_bdevs_discovered": 2, 00:19:37.873 "num_base_bdevs_operational": 2, 00:19:37.873 "base_bdevs_list": [ 00:19:37.873 { 00:19:37.873 "name": "pt1", 
00:19:37.873 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:37.873 "is_configured": true, 00:19:37.873 "data_offset": 256, 00:19:37.873 "data_size": 7936 00:19:37.873 }, 00:19:37.873 { 00:19:37.873 "name": "pt2", 00:19:37.873 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:37.873 "is_configured": true, 00:19:37.873 "data_offset": 256, 00:19:37.873 "data_size": 7936 00:19:37.873 } 00:19:37.873 ] 00:19:37.873 } 00:19:37.873 } 00:19:37.873 }' 00:19:37.873 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:37.873 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:37.873 pt2' 00:19:37.873 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:37.873 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:37.873 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:37.873 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:37.873 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:37.873 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.873 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.873 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.873 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:37.873 21:46:38 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:37.873 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:37.873 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:37.873 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:37.873 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.873 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.873 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.873 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:37.873 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:37.873 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:37.873 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.873 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.873 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:37.873 [2024-12-10 21:46:38.610781] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:37.873 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.873 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # 
raid_bdev_uuid=8042a25e-cd7f-4845-bf9a-e272c3f03324 00:19:37.873 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 8042a25e-cd7f-4845-bf9a-e272c3f03324 ']' 00:19:37.873 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:38.132 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.132 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.132 [2024-12-10 21:46:38.658457] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:38.132 [2024-12-10 21:46:38.658534] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:38.132 [2024-12-10 21:46:38.658638] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:38.132 [2024-12-10 21:46:38.658727] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:38.132 [2024-12-10 21:46:38.658773] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:38.132 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.132 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.132 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.132 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.132 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:38.132 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.132 21:46:38 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:38.132 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:38.132 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.133 21:46:38 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.133 [2024-12-10 21:46:38.798217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:38.133 [2024-12-10 21:46:38.800089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:38.133 [2024-12-10 21:46:38.800173] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:38.133 [2024-12-10 21:46:38.800224] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:38.133 [2024-12-10 21:46:38.800238] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:38.133 [2024-12-10 21:46:38.800248] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:38.133 request: 00:19:38.133 { 00:19:38.133 "name": "raid_bdev1", 00:19:38.133 "raid_level": "raid1", 00:19:38.133 "base_bdevs": [ 00:19:38.133 "malloc1", 00:19:38.133 "malloc2" 00:19:38.133 ], 00:19:38.133 "superblock": false, 00:19:38.133 "method": "bdev_raid_create", 00:19:38.133 "req_id": 1 00:19:38.133 } 00:19:38.133 Got JSON-RPC error response 00:19:38.133 response: 00:19:38.133 { 00:19:38.133 "code": -17, 00:19:38.133 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:38.133 } 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:38.133 21:46:38 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.133 [2024-12-10 21:46:38.862090] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:38.133 [2024-12-10 21:46:38.862198] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:38.133 [2024-12-10 21:46:38.862231] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:38.133 [2024-12-10 21:46:38.862261] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:38.133 [2024-12-10 21:46:38.864186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:38.133 [2024-12-10 21:46:38.864279] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:38.133 [2024-12-10 21:46:38.864346] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:38.133 [2024-12-10 21:46:38.864423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:38.133 pt1 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:38.133 21:46:38 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.133 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.392 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:38.392 "name": "raid_bdev1", 00:19:38.392 "uuid": "8042a25e-cd7f-4845-bf9a-e272c3f03324", 00:19:38.392 "strip_size_kb": 0, 00:19:38.392 "state": "configuring", 00:19:38.392 "raid_level": "raid1", 00:19:38.392 
"superblock": true, 00:19:38.392 "num_base_bdevs": 2, 00:19:38.392 "num_base_bdevs_discovered": 1, 00:19:38.392 "num_base_bdevs_operational": 2, 00:19:38.393 "base_bdevs_list": [ 00:19:38.393 { 00:19:38.393 "name": "pt1", 00:19:38.393 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:38.393 "is_configured": true, 00:19:38.393 "data_offset": 256, 00:19:38.393 "data_size": 7936 00:19:38.393 }, 00:19:38.393 { 00:19:38.393 "name": null, 00:19:38.393 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:38.393 "is_configured": false, 00:19:38.393 "data_offset": 256, 00:19:38.393 "data_size": 7936 00:19:38.393 } 00:19:38.393 ] 00:19:38.393 }' 00:19:38.393 21:46:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:38.393 21:46:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.653 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:38.653 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:38.653 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:38.653 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:38.653 21:46:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.653 21:46:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.653 [2024-12-10 21:46:39.301403] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:38.653 [2024-12-10 21:46:39.301496] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:38.653 [2024-12-10 21:46:39.301520] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:38.653 
[2024-12-10 21:46:39.301535] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:38.653 [2024-12-10 21:46:39.301789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:38.653 [2024-12-10 21:46:39.301805] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:38.653 [2024-12-10 21:46:39.301856] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:38.653 [2024-12-10 21:46:39.301877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:38.653 [2024-12-10 21:46:39.301985] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:38.653 [2024-12-10 21:46:39.301995] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:38.653 [2024-12-10 21:46:39.302067] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:38.653 [2024-12-10 21:46:39.302170] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:38.653 [2024-12-10 21:46:39.302177] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:38.653 [2024-12-10 21:46:39.302265] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:38.653 pt2 00:19:38.653 21:46:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.653 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:38.653 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:38.653 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:38.653 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:19:38.653 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:38.653 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:38.653 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:38.653 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:38.653 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:38.653 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:38.653 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:38.653 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.653 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.653 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.653 21:46:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.653 21:46:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.653 21:46:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.653 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:38.653 "name": "raid_bdev1", 00:19:38.653 "uuid": "8042a25e-cd7f-4845-bf9a-e272c3f03324", 00:19:38.653 "strip_size_kb": 0, 00:19:38.653 "state": "online", 00:19:38.653 "raid_level": "raid1", 00:19:38.653 "superblock": true, 00:19:38.653 "num_base_bdevs": 2, 00:19:38.653 "num_base_bdevs_discovered": 2, 00:19:38.653 
"num_base_bdevs_operational": 2, 00:19:38.653 "base_bdevs_list": [ 00:19:38.653 { 00:19:38.653 "name": "pt1", 00:19:38.653 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:38.653 "is_configured": true, 00:19:38.653 "data_offset": 256, 00:19:38.653 "data_size": 7936 00:19:38.653 }, 00:19:38.653 { 00:19:38.653 "name": "pt2", 00:19:38.653 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:38.653 "is_configured": true, 00:19:38.653 "data_offset": 256, 00:19:38.653 "data_size": 7936 00:19:38.653 } 00:19:38.653 ] 00:19:38.653 }' 00:19:38.653 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:38.653 21:46:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.220 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:39.220 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:39.220 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:39.220 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:39.220 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:39.220 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:39.220 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:39.220 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:39.220 21:46:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.220 21:46:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.220 [2024-12-10 21:46:39.736894] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:39.220 21:46:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.220 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:39.220 "name": "raid_bdev1", 00:19:39.220 "aliases": [ 00:19:39.220 "8042a25e-cd7f-4845-bf9a-e272c3f03324" 00:19:39.220 ], 00:19:39.221 "product_name": "Raid Volume", 00:19:39.221 "block_size": 4096, 00:19:39.221 "num_blocks": 7936, 00:19:39.221 "uuid": "8042a25e-cd7f-4845-bf9a-e272c3f03324", 00:19:39.221 "md_size": 32, 00:19:39.221 "md_interleave": false, 00:19:39.221 "dif_type": 0, 00:19:39.221 "assigned_rate_limits": { 00:19:39.221 "rw_ios_per_sec": 0, 00:19:39.221 "rw_mbytes_per_sec": 0, 00:19:39.221 "r_mbytes_per_sec": 0, 00:19:39.221 "w_mbytes_per_sec": 0 00:19:39.221 }, 00:19:39.221 "claimed": false, 00:19:39.221 "zoned": false, 00:19:39.221 "supported_io_types": { 00:19:39.221 "read": true, 00:19:39.221 "write": true, 00:19:39.221 "unmap": false, 00:19:39.221 "flush": false, 00:19:39.221 "reset": true, 00:19:39.221 "nvme_admin": false, 00:19:39.221 "nvme_io": false, 00:19:39.221 "nvme_io_md": false, 00:19:39.221 "write_zeroes": true, 00:19:39.221 "zcopy": false, 00:19:39.221 "get_zone_info": false, 00:19:39.221 "zone_management": false, 00:19:39.221 "zone_append": false, 00:19:39.221 "compare": false, 00:19:39.221 "compare_and_write": false, 00:19:39.221 "abort": false, 00:19:39.221 "seek_hole": false, 00:19:39.221 "seek_data": false, 00:19:39.221 "copy": false, 00:19:39.221 "nvme_iov_md": false 00:19:39.221 }, 00:19:39.221 "memory_domains": [ 00:19:39.221 { 00:19:39.221 "dma_device_id": "system", 00:19:39.221 "dma_device_type": 1 00:19:39.221 }, 00:19:39.221 { 00:19:39.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:39.221 "dma_device_type": 2 00:19:39.221 }, 00:19:39.221 { 00:19:39.221 "dma_device_id": "system", 00:19:39.221 "dma_device_type": 
1 00:19:39.221 }, 00:19:39.221 { 00:19:39.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:39.221 "dma_device_type": 2 00:19:39.221 } 00:19:39.221 ], 00:19:39.221 "driver_specific": { 00:19:39.221 "raid": { 00:19:39.221 "uuid": "8042a25e-cd7f-4845-bf9a-e272c3f03324", 00:19:39.221 "strip_size_kb": 0, 00:19:39.221 "state": "online", 00:19:39.221 "raid_level": "raid1", 00:19:39.221 "superblock": true, 00:19:39.221 "num_base_bdevs": 2, 00:19:39.221 "num_base_bdevs_discovered": 2, 00:19:39.221 "num_base_bdevs_operational": 2, 00:19:39.221 "base_bdevs_list": [ 00:19:39.221 { 00:19:39.221 "name": "pt1", 00:19:39.221 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:39.221 "is_configured": true, 00:19:39.221 "data_offset": 256, 00:19:39.221 "data_size": 7936 00:19:39.221 }, 00:19:39.221 { 00:19:39.221 "name": "pt2", 00:19:39.221 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:39.221 "is_configured": true, 00:19:39.221 "data_offset": 256, 00:19:39.221 "data_size": 7936 00:19:39.221 } 00:19:39.221 ] 00:19:39.221 } 00:19:39.221 } 00:19:39.221 }' 00:19:39.221 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:39.221 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:39.221 pt2' 00:19:39.221 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:39.221 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:39.221 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:39.221 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:39.221 21:46:39 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.221 21:46:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.221 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:39.221 21:46:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.221 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:39.221 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:39.221 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:39.221 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:39.221 21:46:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.221 21:46:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.221 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:39.221 21:46:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.221 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:39.221 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:39.221 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:39.221 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r 
'.[] | .uuid' 00:19:39.221 21:46:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.221 21:46:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.221 [2024-12-10 21:46:39.964539] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:39.221 21:46:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.221 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 8042a25e-cd7f-4845-bf9a-e272c3f03324 '!=' 8042a25e-cd7f-4845-bf9a-e272c3f03324 ']' 00:19:39.221 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:39.221 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:39.221 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:19:39.221 21:46:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:39.221 21:46:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.221 21:46:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.479 [2024-12-10 21:46:40.004217] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:39.479 21:46:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.479 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:39.479 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:39.479 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:39.479 21:46:40 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:39.479 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:39.479 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:39.479 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:39.479 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:39.479 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:39.479 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:39.479 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.479 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.479 21:46:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.479 21:46:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.479 21:46:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.479 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:39.479 "name": "raid_bdev1", 00:19:39.479 "uuid": "8042a25e-cd7f-4845-bf9a-e272c3f03324", 00:19:39.479 "strip_size_kb": 0, 00:19:39.479 "state": "online", 00:19:39.479 "raid_level": "raid1", 00:19:39.479 "superblock": true, 00:19:39.479 "num_base_bdevs": 2, 00:19:39.479 "num_base_bdevs_discovered": 1, 00:19:39.479 "num_base_bdevs_operational": 1, 00:19:39.479 "base_bdevs_list": [ 00:19:39.479 { 00:19:39.479 "name": null, 00:19:39.479 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:39.479 "is_configured": false, 00:19:39.479 "data_offset": 0, 00:19:39.480 "data_size": 7936 00:19:39.480 }, 00:19:39.480 { 00:19:39.480 "name": "pt2", 00:19:39.480 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:39.480 "is_configured": true, 00:19:39.480 "data_offset": 256, 00:19:39.480 "data_size": 7936 00:19:39.480 } 00:19:39.480 ] 00:19:39.480 }' 00:19:39.480 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:39.480 21:46:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.738 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:39.739 21:46:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.739 21:46:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.739 [2024-12-10 21:46:40.455475] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:39.739 [2024-12-10 21:46:40.455502] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:39.739 [2024-12-10 21:46:40.455580] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:39.739 [2024-12-10 21:46:40.455632] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:39.739 [2024-12-10 21:46:40.455643] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:39.739 21:46:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.739 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.739 21:46:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:19:39.739 21:46:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.739 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:39.739 21:46:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.739 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:39.739 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:39.739 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:39.739 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:39.739 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:39.739 21:46:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.739 21:46:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.031 21:46:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.031 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:40.031 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:40.031 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:40.031 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:40.031 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:19:40.031 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:19:40.031 21:46:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.031 21:46:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.031 [2024-12-10 21:46:40.527306] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:40.031 [2024-12-10 21:46:40.527360] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.031 [2024-12-10 21:46:40.527376] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:40.031 [2024-12-10 21:46:40.527386] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.031 [2024-12-10 21:46:40.529385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.031 [2024-12-10 21:46:40.529477] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:40.031 [2024-12-10 21:46:40.529554] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:40.031 [2024-12-10 21:46:40.529634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:40.031 [2024-12-10 21:46:40.529775] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:40.031 [2024-12-10 21:46:40.529812] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:40.031 [2024-12-10 21:46:40.529897] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:40.031 [2024-12-10 21:46:40.530034] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:40.031 [2024-12-10 21:46:40.530069] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:40.031 [2024-12-10 21:46:40.530197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:40.031 pt2 
00:19:40.031 21:46:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.031 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:40.031 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:40.031 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:40.031 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:40.031 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:40.031 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:40.031 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:40.031 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:40.031 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:40.031 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:40.031 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.031 21:46:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.031 21:46:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.031 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.031 21:46:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.031 21:46:40 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:40.031 "name": "raid_bdev1", 00:19:40.031 "uuid": "8042a25e-cd7f-4845-bf9a-e272c3f03324", 00:19:40.031 "strip_size_kb": 0, 00:19:40.031 "state": "online", 00:19:40.031 "raid_level": "raid1", 00:19:40.031 "superblock": true, 00:19:40.031 "num_base_bdevs": 2, 00:19:40.031 "num_base_bdevs_discovered": 1, 00:19:40.031 "num_base_bdevs_operational": 1, 00:19:40.031 "base_bdevs_list": [ 00:19:40.031 { 00:19:40.032 "name": null, 00:19:40.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.032 "is_configured": false, 00:19:40.032 "data_offset": 256, 00:19:40.032 "data_size": 7936 00:19:40.032 }, 00:19:40.032 { 00:19:40.032 "name": "pt2", 00:19:40.032 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:40.032 "is_configured": true, 00:19:40.032 "data_offset": 256, 00:19:40.032 "data_size": 7936 00:19:40.032 } 00:19:40.032 ] 00:19:40.032 }' 00:19:40.032 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:40.032 21:46:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.308 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:40.308 21:46:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.308 21:46:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.308 [2024-12-10 21:46:40.986556] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:40.308 [2024-12-10 21:46:40.986587] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:40.308 [2024-12-10 21:46:40.986660] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:40.308 [2024-12-10 21:46:40.986709] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid 
bdev base bdevs is 0, going to free all in destruct 00:19:40.308 [2024-12-10 21:46:40.986718] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:40.308 21:46:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.308 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.308 21:46:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.308 21:46:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.308 21:46:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:40.308 21:46:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.308 21:46:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:40.308 21:46:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:40.308 21:46:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:40.308 21:46:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:40.308 21:46:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.308 21:46:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.308 [2024-12-10 21:46:41.038566] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:40.308 [2024-12-10 21:46:41.038628] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.308 [2024-12-10 21:46:41.038652] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000009980 00:19:40.308 [2024-12-10 21:46:41.038663] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.308 [2024-12-10 21:46:41.040739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.308 [2024-12-10 21:46:41.040776] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:40.308 [2024-12-10 21:46:41.040834] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:40.308 [2024-12-10 21:46:41.040890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:40.308 [2024-12-10 21:46:41.041019] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:40.308 [2024-12-10 21:46:41.041028] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:40.308 [2024-12-10 21:46:41.041046] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:40.308 [2024-12-10 21:46:41.041108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:40.308 [2024-12-10 21:46:41.041176] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:40.308 [2024-12-10 21:46:41.041184] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:40.308 [2024-12-10 21:46:41.041265] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:40.308 [2024-12-10 21:46:41.041384] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:40.308 [2024-12-10 21:46:41.041394] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:40.308 [2024-12-10 21:46:41.041511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:40.308 pt1 00:19:40.308 
21:46:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.308 21:46:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:19:40.308 21:46:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:40.308 21:46:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:40.308 21:46:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:40.308 21:46:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:40.308 21:46:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:40.308 21:46:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:40.308 21:46:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:40.308 21:46:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:40.308 21:46:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:40.308 21:46:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:40.308 21:46:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.308 21:46:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.308 21:46:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.309 21:46:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.309 21:46:41 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.576 21:46:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:40.576 "name": "raid_bdev1", 00:19:40.576 "uuid": "8042a25e-cd7f-4845-bf9a-e272c3f03324", 00:19:40.576 "strip_size_kb": 0, 00:19:40.576 "state": "online", 00:19:40.576 "raid_level": "raid1", 00:19:40.576 "superblock": true, 00:19:40.576 "num_base_bdevs": 2, 00:19:40.576 "num_base_bdevs_discovered": 1, 00:19:40.576 "num_base_bdevs_operational": 1, 00:19:40.576 "base_bdevs_list": [ 00:19:40.576 { 00:19:40.576 "name": null, 00:19:40.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.576 "is_configured": false, 00:19:40.576 "data_offset": 256, 00:19:40.576 "data_size": 7936 00:19:40.576 }, 00:19:40.576 { 00:19:40.576 "name": "pt2", 00:19:40.576 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:40.576 "is_configured": true, 00:19:40.576 "data_offset": 256, 00:19:40.576 "data_size": 7936 00:19:40.576 } 00:19:40.576 ] 00:19:40.576 }' 00:19:40.576 21:46:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:40.576 21:46:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.836 21:46:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:40.836 21:46:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:40.836 21:46:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.836 21:46:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.836 21:46:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.836 21:46:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:40.836 21:46:41 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:40.836 21:46:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:40.836 21:46:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.836 21:46:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.836 [2024-12-10 21:46:41.493988] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:40.836 21:46:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.836 21:46:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 8042a25e-cd7f-4845-bf9a-e272c3f03324 '!=' 8042a25e-cd7f-4845-bf9a-e272c3f03324 ']' 00:19:40.836 21:46:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87605 00:19:40.836 21:46:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87605 ']' 00:19:40.836 21:46:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87605 00:19:40.836 21:46:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:19:40.836 21:46:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:40.836 21:46:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87605 00:19:40.836 killing process with pid 87605 00:19:40.836 21:46:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:40.836 21:46:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:40.836 21:46:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 87605' 00:19:40.836 21:46:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87605 00:19:40.836 [2024-12-10 21:46:41.551621] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:40.836 21:46:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87605 00:19:40.836 [2024-12-10 21:46:41.551709] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:40.836 [2024-12-10 21:46:41.551787] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:40.836 [2024-12-10 21:46:41.551806] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:41.095 [2024-12-10 21:46:41.772707] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:42.475 ************************************ 00:19:42.475 END TEST raid_superblock_test_md_separate 00:19:42.475 ************************************ 00:19:42.475 21:46:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:19:42.475 00:19:42.475 real 0m6.036s 00:19:42.475 user 0m9.124s 00:19:42.475 sys 0m1.030s 00:19:42.475 21:46:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:42.475 21:46:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:42.475 21:46:42 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:19:42.475 21:46:42 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:19:42.475 21:46:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:42.475 21:46:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:42.475 21:46:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:42.475 ************************************ 
00:19:42.475 START TEST raid_rebuild_test_sb_md_separate 00:19:42.475 ************************************ 00:19:42.475 21:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:19:42.475 21:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:42.475 21:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:42.475 21:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:42.475 21:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:42.475 21:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:42.475 21:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:42.475 21:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:42.475 21:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:42.475 21:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:42.475 21:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:42.475 21:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:42.475 21:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:42.475 21:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:42.475 21:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:42.475 21:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:42.475 
21:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:42.475 21:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:42.475 21:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:42.475 21:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:42.475 21:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:42.475 21:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:42.475 21:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:42.475 21:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:42.475 21:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:42.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:42.475 21:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87929 00:19:42.475 21:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87929 00:19:42.475 21:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87929 ']' 00:19:42.475 21:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.475 21:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:42.475 21:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:42.475 21:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:42.475 21:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:42.475 21:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:42.475 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:42.475 Zero copy mechanism will not be used. 00:19:42.475 [2024-12-10 21:46:43.038523] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:19:42.475 [2024-12-10 21:46:43.038641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87929 ] 00:19:42.475 [2024-12-10 21:46:43.210678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.734 [2024-12-10 21:46:43.320645] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.734 [2024-12-10 21:46:43.509153] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:42.734 [2024-12-10 21:46:43.509215] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:43.304 21:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:43.304 21:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:19:43.304 21:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:43.304 21:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:19:43.304 21:46:43 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.304 21:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.304 BaseBdev1_malloc 00:19:43.304 21:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.304 21:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:43.304 21:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.304 21:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.304 [2024-12-10 21:46:43.916970] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:43.304 [2024-12-10 21:46:43.917088] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:43.304 [2024-12-10 21:46:43.917116] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:43.304 [2024-12-10 21:46:43.917127] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.304 [2024-12-10 21:46:43.918948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.304 [2024-12-10 21:46:43.918988] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:43.304 BaseBdev1 00:19:43.304 21:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.304 21:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:43.304 21:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:19:43.304 21:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:43.304 21:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.304 BaseBdev2_malloc 00:19:43.304 21:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.304 21:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:43.304 21:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.304 21:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.304 [2024-12-10 21:46:43.971338] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:43.304 [2024-12-10 21:46:43.971405] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:43.304 [2024-12-10 21:46:43.971445] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:43.304 [2024-12-10 21:46:43.971458] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.304 [2024-12-10 21:46:43.973324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.304 [2024-12-10 21:46:43.973376] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:43.304 BaseBdev2 00:19:43.304 21:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.304 21:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:19:43.304 21:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.304 21:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.304 spare_malloc 00:19:43.304 21:46:44 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.304 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:43.304 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.304 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.304 spare_delay 00:19:43.304 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.304 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:43.304 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.304 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.304 [2024-12-10 21:46:44.041433] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:43.304 [2024-12-10 21:46:44.041483] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:43.304 [2024-12-10 21:46:44.041503] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:43.304 [2024-12-10 21:46:44.041513] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.304 [2024-12-10 21:46:44.043458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.304 [2024-12-10 21:46:44.043531] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:43.304 spare 00:19:43.304 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.304 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd 
bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:43.304 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.304 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.304 [2024-12-10 21:46:44.049463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:43.304 [2024-12-10 21:46:44.051206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:43.304 [2024-12-10 21:46:44.051381] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:43.304 [2024-12-10 21:46:44.051397] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:43.304 [2024-12-10 21:46:44.051486] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:43.304 [2024-12-10 21:46:44.051624] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:43.304 [2024-12-10 21:46:44.051635] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:43.304 [2024-12-10 21:46:44.051733] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:43.304 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.304 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:43.304 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:43.304 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:43.304 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:43.304 21:46:44 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:43.304 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:43.304 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:43.304 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.304 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.304 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:43.304 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.304 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.304 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.304 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.304 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.563 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.563 "name": "raid_bdev1", 00:19:43.563 "uuid": "1fb009ae-3a90-420d-b189-cd4b40152505", 00:19:43.563 "strip_size_kb": 0, 00:19:43.563 "state": "online", 00:19:43.563 "raid_level": "raid1", 00:19:43.563 "superblock": true, 00:19:43.563 "num_base_bdevs": 2, 00:19:43.563 "num_base_bdevs_discovered": 2, 00:19:43.563 "num_base_bdevs_operational": 2, 00:19:43.563 "base_bdevs_list": [ 00:19:43.563 { 00:19:43.563 "name": "BaseBdev1", 00:19:43.563 "uuid": "0478fe79-7256-5f09-8895-6967a12fc0a8", 00:19:43.563 "is_configured": true, 00:19:43.563 "data_offset": 256, 00:19:43.563 
"data_size": 7936 00:19:43.563 }, 00:19:43.563 { 00:19:43.563 "name": "BaseBdev2", 00:19:43.563 "uuid": "237d700a-5d7f-5b05-bb16-a154f86ea3fc", 00:19:43.563 "is_configured": true, 00:19:43.563 "data_offset": 256, 00:19:43.563 "data_size": 7936 00:19:43.563 } 00:19:43.563 ] 00:19:43.563 }' 00:19:43.563 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.563 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.821 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:43.821 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:43.821 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.821 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.821 [2024-12-10 21:46:44.512970] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:43.821 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.821 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:43.821 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:43.821 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.821 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.821 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.821 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.821 21:46:44 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:43.821 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:43.821 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:43.821 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:43.821 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:43.821 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:43.821 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:43.821 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:43.821 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:43.821 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:43.821 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:19:43.821 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:43.821 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:43.821 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:44.079 [2024-12-10 21:46:44.752428] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:44.079 /dev/nbd0 00:19:44.079 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:44.079 21:46:44 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:44.079 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:44.079 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:19:44.079 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:44.079 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:44.079 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:44.079 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:19:44.079 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:44.079 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:44.079 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:44.079 1+0 records in 00:19:44.079 1+0 records out 00:19:44.079 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022581 s, 18.1 MB/s 00:19:44.079 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:44.079 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:19:44.079 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:44.079 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:44.079 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@893 -- # return 0 00:19:44.080 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:44.080 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:44.080 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:44.080 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:44.080 21:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:19:45.017 7936+0 records in 00:19:45.017 7936+0 records out 00:19:45.017 32505856 bytes (33 MB, 31 MiB) copied, 0.630068 s, 51.6 MB/s 00:19:45.017 21:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:45.017 21:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:45.017 21:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:45.017 21:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:45.017 21:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:19:45.017 21:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:45.017 21:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:45.017 21:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:45.017 [2024-12-10 21:46:45.668937] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:45.017 21:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:45.017 21:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:45.017 21:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:45.017 21:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:45.017 21:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:45.017 21:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:45.017 21:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:45.017 21:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:45.017 21:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.017 21:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.017 [2024-12-10 21:46:45.685026] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:45.017 21:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.017 21:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:45.017 21:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:45.017 21:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:45.017 21:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:45.017 21:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:45.017 21:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:45.017 21:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:45.017 21:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:45.017 21:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:45.017 21:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:45.017 21:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.017 21:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.017 21:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.017 21:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.017 21:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.018 21:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:45.018 "name": "raid_bdev1", 00:19:45.018 "uuid": "1fb009ae-3a90-420d-b189-cd4b40152505", 00:19:45.018 "strip_size_kb": 0, 00:19:45.018 "state": "online", 00:19:45.018 "raid_level": "raid1", 00:19:45.018 "superblock": true, 00:19:45.018 "num_base_bdevs": 2, 00:19:45.018 "num_base_bdevs_discovered": 1, 00:19:45.018 "num_base_bdevs_operational": 1, 00:19:45.018 "base_bdevs_list": [ 00:19:45.018 { 00:19:45.018 "name": null, 00:19:45.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.018 "is_configured": false, 00:19:45.018 "data_offset": 0, 00:19:45.018 "data_size": 7936 00:19:45.018 }, 00:19:45.018 { 00:19:45.018 "name": "BaseBdev2", 00:19:45.018 "uuid": "237d700a-5d7f-5b05-bb16-a154f86ea3fc", 00:19:45.018 "is_configured": 
true, 00:19:45.018 "data_offset": 256, 00:19:45.018 "data_size": 7936 00:19:45.018 } 00:19:45.018 ] 00:19:45.018 }' 00:19:45.018 21:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:45.018 21:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.586 21:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:45.586 21:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.586 21:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.586 [2024-12-10 21:46:46.148340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:45.586 [2024-12-10 21:46:46.163179] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:19:45.586 21:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.586 21:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:45.586 [2024-12-10 21:46:46.165084] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:46.523 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:46.523 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:46.523 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:46.523 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:46.524 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:46.524 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.524 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.524 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.524 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:46.524 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.524 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:46.524 "name": "raid_bdev1", 00:19:46.524 "uuid": "1fb009ae-3a90-420d-b189-cd4b40152505", 00:19:46.524 "strip_size_kb": 0, 00:19:46.524 "state": "online", 00:19:46.524 "raid_level": "raid1", 00:19:46.524 "superblock": true, 00:19:46.524 "num_base_bdevs": 2, 00:19:46.524 "num_base_bdevs_discovered": 2, 00:19:46.524 "num_base_bdevs_operational": 2, 00:19:46.524 "process": { 00:19:46.524 "type": "rebuild", 00:19:46.524 "target": "spare", 00:19:46.524 "progress": { 00:19:46.524 "blocks": 2560, 00:19:46.524 "percent": 32 00:19:46.524 } 00:19:46.524 }, 00:19:46.524 "base_bdevs_list": [ 00:19:46.524 { 00:19:46.524 "name": "spare", 00:19:46.524 "uuid": "9fab66c6-53e5-5cf5-8e30-3d3000490e02", 00:19:46.524 "is_configured": true, 00:19:46.524 "data_offset": 256, 00:19:46.524 "data_size": 7936 00:19:46.524 }, 00:19:46.524 { 00:19:46.524 "name": "BaseBdev2", 00:19:46.524 "uuid": "237d700a-5d7f-5b05-bb16-a154f86ea3fc", 00:19:46.524 "is_configured": true, 00:19:46.524 "data_offset": 256, 00:19:46.524 "data_size": 7936 00:19:46.524 } 00:19:46.524 ] 00:19:46.524 }' 00:19:46.524 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:46.524 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:46.524 
21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:46.783 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:46.783 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:46.783 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.783 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:46.783 [2024-12-10 21:46:47.325279] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:46.783 [2024-12-10 21:46:47.370463] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:46.783 [2024-12-10 21:46:47.370534] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:46.783 [2024-12-10 21:46:47.370548] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:46.783 [2024-12-10 21:46:47.370560] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:46.783 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.783 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:46.783 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:46.783 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:46.783 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:46.783 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:46.783 21:46:47 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:46.783 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:46.783 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:46.783 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:46.783 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:46.783 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.783 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.783 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:46.783 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.783 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.783 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:46.783 "name": "raid_bdev1", 00:19:46.783 "uuid": "1fb009ae-3a90-420d-b189-cd4b40152505", 00:19:46.783 "strip_size_kb": 0, 00:19:46.783 "state": "online", 00:19:46.783 "raid_level": "raid1", 00:19:46.783 "superblock": true, 00:19:46.783 "num_base_bdevs": 2, 00:19:46.783 "num_base_bdevs_discovered": 1, 00:19:46.783 "num_base_bdevs_operational": 1, 00:19:46.783 "base_bdevs_list": [ 00:19:46.783 { 00:19:46.783 "name": null, 00:19:46.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.783 "is_configured": false, 00:19:46.783 "data_offset": 0, 00:19:46.783 "data_size": 7936 00:19:46.783 }, 00:19:46.783 { 00:19:46.783 "name": "BaseBdev2", 00:19:46.783 "uuid": 
"237d700a-5d7f-5b05-bb16-a154f86ea3fc", 00:19:46.783 "is_configured": true, 00:19:46.783 "data_offset": 256, 00:19:46.783 "data_size": 7936 00:19:46.783 } 00:19:46.783 ] 00:19:46.783 }' 00:19:46.784 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:46.784 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.043 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:47.043 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:47.043 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:47.043 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:47.043 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:47.043 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.043 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.043 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.043 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.303 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.303 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:47.303 "name": "raid_bdev1", 00:19:47.303 "uuid": "1fb009ae-3a90-420d-b189-cd4b40152505", 00:19:47.303 "strip_size_kb": 0, 00:19:47.303 "state": "online", 00:19:47.303 "raid_level": "raid1", 00:19:47.303 "superblock": true, 00:19:47.303 
"num_base_bdevs": 2, 00:19:47.303 "num_base_bdevs_discovered": 1, 00:19:47.303 "num_base_bdevs_operational": 1, 00:19:47.303 "base_bdevs_list": [ 00:19:47.303 { 00:19:47.303 "name": null, 00:19:47.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.303 "is_configured": false, 00:19:47.303 "data_offset": 0, 00:19:47.303 "data_size": 7936 00:19:47.303 }, 00:19:47.303 { 00:19:47.303 "name": "BaseBdev2", 00:19:47.303 "uuid": "237d700a-5d7f-5b05-bb16-a154f86ea3fc", 00:19:47.303 "is_configured": true, 00:19:47.303 "data_offset": 256, 00:19:47.303 "data_size": 7936 00:19:47.303 } 00:19:47.303 ] 00:19:47.303 }' 00:19:47.303 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:47.303 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:47.303 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:47.303 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:47.303 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:47.303 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.303 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.303 [2024-12-10 21:46:47.957667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:47.303 [2024-12-10 21:46:47.972408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:19:47.303 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.303 21:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:47.303 [2024-12-10 21:46:47.974168] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:48.242 21:46:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:48.242 21:46:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:48.242 21:46:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:48.242 21:46:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:48.242 21:46:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:48.242 21:46:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.242 21:46:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.242 21:46:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:48.242 21:46:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.242 21:46:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.500 21:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:48.500 "name": "raid_bdev1", 00:19:48.500 "uuid": "1fb009ae-3a90-420d-b189-cd4b40152505", 00:19:48.500 "strip_size_kb": 0, 00:19:48.500 "state": "online", 00:19:48.500 "raid_level": "raid1", 00:19:48.500 "superblock": true, 00:19:48.500 "num_base_bdevs": 2, 00:19:48.500 "num_base_bdevs_discovered": 2, 00:19:48.500 "num_base_bdevs_operational": 2, 00:19:48.500 "process": { 00:19:48.500 "type": "rebuild", 00:19:48.500 "target": "spare", 00:19:48.500 "progress": { 00:19:48.500 "blocks": 2560, 00:19:48.500 "percent": 32 00:19:48.500 } 00:19:48.500 
}, 00:19:48.500 "base_bdevs_list": [ 00:19:48.500 { 00:19:48.500 "name": "spare", 00:19:48.500 "uuid": "9fab66c6-53e5-5cf5-8e30-3d3000490e02", 00:19:48.500 "is_configured": true, 00:19:48.500 "data_offset": 256, 00:19:48.500 "data_size": 7936 00:19:48.500 }, 00:19:48.500 { 00:19:48.500 "name": "BaseBdev2", 00:19:48.500 "uuid": "237d700a-5d7f-5b05-bb16-a154f86ea3fc", 00:19:48.500 "is_configured": true, 00:19:48.500 "data_offset": 256, 00:19:48.500 "data_size": 7936 00:19:48.500 } 00:19:48.500 ] 00:19:48.500 }' 00:19:48.500 21:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:48.500 21:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:48.500 21:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:48.500 21:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:48.500 21:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:48.500 21:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:48.500 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:48.500 21:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:48.500 21:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:48.500 21:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:48.500 21:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=721 00:19:48.500 21:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:48.500 21:46:49 
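The `line 666: [: =: unary operator expected` error captured above is bash's classic unquoted-test pitfall: when the variable in `'[' $var = false ']'` expands to an empty string, `[` receives only `= false` and cannot parse it. A minimal sketch of that failure mode and the quoted fix (hypothetical variable name `flag`, not taken from the SPDK sources):

```shell
#!/usr/bin/env bash
# Reproduce the failure mode the log records: an unquoted empty variable
# collapses out of the word list, so `[` sees `[ = false ]` and errors out.
flag=""

# Unquoted: expands to `[ = false ]` -> "unary operator expected" (exit 2).
[ $flag = false ] 2>/dev/null
echo "unquoted exit status: $?"

# Quoted: the empty string survives as a real operand, so the test is valid
# and simply evaluates to false.
if [ "$flag" = false ]; then
  echo "quoted: matched false"
else
  echo "quoted: did not match"
fi

# Alternative fix: bash's [[ ]] does no word splitting, so quoting is
# unnecessary inside it.
if [[ $flag == false ]]; then
  echo "[[ ]]: matched false"
else
  echo "[[ ]]: did not match"
fi
```

Either quoting the expansion or switching to `[[ ]]` would make the script's line-666 check robust against an unset variable.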
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:48.500 21:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:48.500 21:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:48.500 21:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:48.500 21:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:48.500 21:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.500 21:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.500 21:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:48.500 21:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.501 21:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.501 21:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:48.501 "name": "raid_bdev1", 00:19:48.501 "uuid": "1fb009ae-3a90-420d-b189-cd4b40152505", 00:19:48.501 "strip_size_kb": 0, 00:19:48.501 "state": "online", 00:19:48.501 "raid_level": "raid1", 00:19:48.501 "superblock": true, 00:19:48.501 "num_base_bdevs": 2, 00:19:48.501 "num_base_bdevs_discovered": 2, 00:19:48.501 "num_base_bdevs_operational": 2, 00:19:48.501 "process": { 00:19:48.501 "type": "rebuild", 00:19:48.501 "target": "spare", 00:19:48.501 "progress": { 00:19:48.501 "blocks": 2816, 00:19:48.501 "percent": 35 00:19:48.501 } 00:19:48.501 }, 00:19:48.501 "base_bdevs_list": [ 00:19:48.501 { 00:19:48.501 "name": "spare", 00:19:48.501 "uuid": 
"9fab66c6-53e5-5cf5-8e30-3d3000490e02", 00:19:48.501 "is_configured": true, 00:19:48.501 "data_offset": 256, 00:19:48.501 "data_size": 7936 00:19:48.501 }, 00:19:48.501 { 00:19:48.501 "name": "BaseBdev2", 00:19:48.501 "uuid": "237d700a-5d7f-5b05-bb16-a154f86ea3fc", 00:19:48.501 "is_configured": true, 00:19:48.501 "data_offset": 256, 00:19:48.501 "data_size": 7936 00:19:48.501 } 00:19:48.501 ] 00:19:48.501 }' 00:19:48.501 21:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:48.501 21:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:48.501 21:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:48.501 21:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:48.501 21:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:49.877 21:46:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:49.877 21:46:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:49.877 21:46:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:49.877 21:46:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:49.877 21:46:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:49.877 21:46:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:49.877 21:46:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.877 21:46:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:49.877 21:46:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:49.877 21:46:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.877 21:46:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.877 21:46:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:49.877 "name": "raid_bdev1", 00:19:49.877 "uuid": "1fb009ae-3a90-420d-b189-cd4b40152505", 00:19:49.877 "strip_size_kb": 0, 00:19:49.877 "state": "online", 00:19:49.877 "raid_level": "raid1", 00:19:49.877 "superblock": true, 00:19:49.877 "num_base_bdevs": 2, 00:19:49.877 "num_base_bdevs_discovered": 2, 00:19:49.877 "num_base_bdevs_operational": 2, 00:19:49.877 "process": { 00:19:49.877 "type": "rebuild", 00:19:49.877 "target": "spare", 00:19:49.877 "progress": { 00:19:49.877 "blocks": 5632, 00:19:49.877 "percent": 70 00:19:49.877 } 00:19:49.877 }, 00:19:49.877 "base_bdevs_list": [ 00:19:49.877 { 00:19:49.877 "name": "spare", 00:19:49.877 "uuid": "9fab66c6-53e5-5cf5-8e30-3d3000490e02", 00:19:49.877 "is_configured": true, 00:19:49.877 "data_offset": 256, 00:19:49.877 "data_size": 7936 00:19:49.877 }, 00:19:49.877 { 00:19:49.877 "name": "BaseBdev2", 00:19:49.877 "uuid": "237d700a-5d7f-5b05-bb16-a154f86ea3fc", 00:19:49.877 "is_configured": true, 00:19:49.877 "data_offset": 256, 00:19:49.877 "data_size": 7936 00:19:49.877 } 00:19:49.877 ] 00:19:49.877 }' 00:19:49.877 21:46:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:49.877 21:46:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:49.877 21:46:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:49.877 21:46:50 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:49.877 21:46:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:50.445 [2024-12-10 21:46:51.086995] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:50.445 [2024-12-10 21:46:51.087069] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:50.445 [2024-12-10 21:46:51.087179] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:50.704 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:50.704 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:50.704 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:50.704 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:50.704 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:50.704 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:50.704 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.704 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.704 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.704 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.704 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.704 21:46:51 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:50.704 "name": "raid_bdev1", 00:19:50.704 "uuid": "1fb009ae-3a90-420d-b189-cd4b40152505", 00:19:50.704 "strip_size_kb": 0, 00:19:50.704 "state": "online", 00:19:50.704 "raid_level": "raid1", 00:19:50.704 "superblock": true, 00:19:50.704 "num_base_bdevs": 2, 00:19:50.704 "num_base_bdevs_discovered": 2, 00:19:50.704 "num_base_bdevs_operational": 2, 00:19:50.704 "base_bdevs_list": [ 00:19:50.704 { 00:19:50.704 "name": "spare", 00:19:50.704 "uuid": "9fab66c6-53e5-5cf5-8e30-3d3000490e02", 00:19:50.704 "is_configured": true, 00:19:50.704 "data_offset": 256, 00:19:50.704 "data_size": 7936 00:19:50.704 }, 00:19:50.704 { 00:19:50.704 "name": "BaseBdev2", 00:19:50.704 "uuid": "237d700a-5d7f-5b05-bb16-a154f86ea3fc", 00:19:50.704 "is_configured": true, 00:19:50.704 "data_offset": 256, 00:19:50.704 "data_size": 7936 00:19:50.704 } 00:19:50.704 ] 00:19:50.704 }' 00:19:50.704 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:50.704 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:50.704 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:50.704 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:50.704 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:19:50.704 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:50.704 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:50.704 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:50.704 21:46:51 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:50.704 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:51.008 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.008 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.008 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.008 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:51.008 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.008 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:51.008 "name": "raid_bdev1", 00:19:51.008 "uuid": "1fb009ae-3a90-420d-b189-cd4b40152505", 00:19:51.008 "strip_size_kb": 0, 00:19:51.008 "state": "online", 00:19:51.008 "raid_level": "raid1", 00:19:51.008 "superblock": true, 00:19:51.008 "num_base_bdevs": 2, 00:19:51.008 "num_base_bdevs_discovered": 2, 00:19:51.008 "num_base_bdevs_operational": 2, 00:19:51.008 "base_bdevs_list": [ 00:19:51.008 { 00:19:51.008 "name": "spare", 00:19:51.008 "uuid": "9fab66c6-53e5-5cf5-8e30-3d3000490e02", 00:19:51.008 "is_configured": true, 00:19:51.008 "data_offset": 256, 00:19:51.008 "data_size": 7936 00:19:51.008 }, 00:19:51.008 { 00:19:51.008 "name": "BaseBdev2", 00:19:51.008 "uuid": "237d700a-5d7f-5b05-bb16-a154f86ea3fc", 00:19:51.008 "is_configured": true, 00:19:51.008 "data_offset": 256, 00:19:51.008 "data_size": 7936 00:19:51.008 } 00:19:51.008 ] 00:19:51.008 }' 00:19:51.008 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:51.008 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:51.008 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:51.008 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:51.008 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:51.008 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:51.008 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:51.008 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:51.008 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:51.008 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:51.008 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:51.008 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:51.008 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:51.008 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:51.008 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.008 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.008 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.008 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 
-- # set +x 00:19:51.008 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.008 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:51.008 "name": "raid_bdev1", 00:19:51.008 "uuid": "1fb009ae-3a90-420d-b189-cd4b40152505", 00:19:51.008 "strip_size_kb": 0, 00:19:51.008 "state": "online", 00:19:51.008 "raid_level": "raid1", 00:19:51.008 "superblock": true, 00:19:51.008 "num_base_bdevs": 2, 00:19:51.008 "num_base_bdevs_discovered": 2, 00:19:51.008 "num_base_bdevs_operational": 2, 00:19:51.008 "base_bdevs_list": [ 00:19:51.008 { 00:19:51.008 "name": "spare", 00:19:51.008 "uuid": "9fab66c6-53e5-5cf5-8e30-3d3000490e02", 00:19:51.008 "is_configured": true, 00:19:51.008 "data_offset": 256, 00:19:51.008 "data_size": 7936 00:19:51.008 }, 00:19:51.008 { 00:19:51.008 "name": "BaseBdev2", 00:19:51.008 "uuid": "237d700a-5d7f-5b05-bb16-a154f86ea3fc", 00:19:51.008 "is_configured": true, 00:19:51.008 "data_offset": 256, 00:19:51.008 "data_size": 7936 00:19:51.008 } 00:19:51.008 ] 00:19:51.008 }' 00:19:51.008 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:51.008 21:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:51.286 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:51.286 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.286 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:51.286 [2024-12-10 21:46:52.053911] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:51.286 [2024-12-10 21:46:52.053997] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:51.286 [2024-12-10 21:46:52.054104] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:51.286 [2024-12-10 21:46:52.054197] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:51.286 [2024-12-10 21:46:52.054238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:51.286 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.286 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.286 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.286 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:51.551 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:19:51.551 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.551 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:51.551 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:51.551 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:51.551 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:51.551 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:51.551 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:51.551 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:51.551 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:51.551 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:51.551 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:19:51.551 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:51.551 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:51.551 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:51.551 /dev/nbd0 00:19:51.551 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:51.551 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:51.551 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:51.551 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:19:51.551 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:51.551 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:51.551 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:51.551 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:19:51.551 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:51.551 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:51.551 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:51.551 1+0 records in 00:19:51.551 1+0 records out 00:19:51.551 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386347 s, 10.6 MB/s 00:19:51.551 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:51.812 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:19:51.812 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:51.812 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:51.812 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:19:51.812 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:51.812 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:51.812 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:51.812 /dev/nbd1 00:19:51.812 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:51.812 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:51.812 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:51.812 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:19:51.812 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:51.812 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 
00:19:51.812 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:51.812 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:19:51.812 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:51.812 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:51.812 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:51.812 1+0 records in 00:19:51.812 1+0 records out 00:19:51.812 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226524 s, 18.1 MB/s 00:19:51.812 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:51.812 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:19:51.812 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:51.812 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:51.812 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:19:51.812 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:51.812 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:51.812 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:52.072 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:52.072 21:46:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:52.072 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:52.072 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:52.072 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:19:52.072 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:52.072 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:52.330 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:52.330 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:52.330 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:52.330 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:52.330 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:52.330 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:52.330 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:52.330 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:52.330 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:52.330 21:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:52.590 21:46:53 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:52.590 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:52.590 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:52.590 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:52.590 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:52.590 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:52.590 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:52.590 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:52.590 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:52.590 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:52.590 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.590 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.590 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.590 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:52.590 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.590 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.590 [2024-12-10 21:46:53.183571] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:52.590 [2024-12-10 21:46:53.183628] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:52.590 [2024-12-10 21:46:53.183677] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:52.590 [2024-12-10 21:46:53.183686] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:52.590 [2024-12-10 21:46:53.185648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:52.590 [2024-12-10 21:46:53.185686] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:52.590 [2024-12-10 21:46:53.185753] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:52.590 [2024-12-10 21:46:53.185814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:52.590 [2024-12-10 21:46:53.185952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:52.590 spare 00:19:52.590 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.590 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:52.590 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.590 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.590 [2024-12-10 21:46:53.285855] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:52.590 [2024-12-10 21:46:53.285930] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:52.590 [2024-12-10 21:46:53.286063] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:19:52.590 [2024-12-10 21:46:53.286245] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:52.590 [2024-12-10 21:46:53.286283] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:52.590 [2024-12-10 21:46:53.286466] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:52.590 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.590 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:52.590 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:52.590 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:52.590 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:52.590 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:52.590 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:52.590 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:52.590 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:52.590 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:52.590 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:52.590 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.590 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.590 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.591 21:46:53 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.591 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.591 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:52.591 "name": "raid_bdev1", 00:19:52.591 "uuid": "1fb009ae-3a90-420d-b189-cd4b40152505", 00:19:52.591 "strip_size_kb": 0, 00:19:52.591 "state": "online", 00:19:52.591 "raid_level": "raid1", 00:19:52.591 "superblock": true, 00:19:52.591 "num_base_bdevs": 2, 00:19:52.591 "num_base_bdevs_discovered": 2, 00:19:52.591 "num_base_bdevs_operational": 2, 00:19:52.591 "base_bdevs_list": [ 00:19:52.591 { 00:19:52.591 "name": "spare", 00:19:52.591 "uuid": "9fab66c6-53e5-5cf5-8e30-3d3000490e02", 00:19:52.591 "is_configured": true, 00:19:52.591 "data_offset": 256, 00:19:52.591 "data_size": 7936 00:19:52.591 }, 00:19:52.591 { 00:19:52.591 "name": "BaseBdev2", 00:19:52.591 "uuid": "237d700a-5d7f-5b05-bb16-a154f86ea3fc", 00:19:52.591 "is_configured": true, 00:19:52.591 "data_offset": 256, 00:19:52.591 "data_size": 7936 00:19:52.591 } 00:19:52.591 ] 00:19:52.591 }' 00:19:52.591 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:52.591 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.160 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:53.160 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:53.160 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:53.160 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:53.160 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:19:53.160 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.161 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.161 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.161 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.161 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.161 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:53.161 "name": "raid_bdev1", 00:19:53.161 "uuid": "1fb009ae-3a90-420d-b189-cd4b40152505", 00:19:53.161 "strip_size_kb": 0, 00:19:53.161 "state": "online", 00:19:53.161 "raid_level": "raid1", 00:19:53.161 "superblock": true, 00:19:53.161 "num_base_bdevs": 2, 00:19:53.161 "num_base_bdevs_discovered": 2, 00:19:53.161 "num_base_bdevs_operational": 2, 00:19:53.161 "base_bdevs_list": [ 00:19:53.161 { 00:19:53.161 "name": "spare", 00:19:53.161 "uuid": "9fab66c6-53e5-5cf5-8e30-3d3000490e02", 00:19:53.161 "is_configured": true, 00:19:53.161 "data_offset": 256, 00:19:53.161 "data_size": 7936 00:19:53.161 }, 00:19:53.161 { 00:19:53.161 "name": "BaseBdev2", 00:19:53.161 "uuid": "237d700a-5d7f-5b05-bb16-a154f86ea3fc", 00:19:53.161 "is_configured": true, 00:19:53.161 "data_offset": 256, 00:19:53.161 "data_size": 7936 00:19:53.161 } 00:19:53.161 ] 00:19:53.161 }' 00:19:53.161 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:53.161 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:53.161 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:53.161 
21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:53.161 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.161 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:53.161 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.161 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.161 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.161 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:53.161 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:53.161 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.161 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.161 [2024-12-10 21:46:53.934351] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:53.419 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.419 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:53.419 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:53.419 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:53.419 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:53.419 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:53.419 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:53.419 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:53.419 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:53.419 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:53.419 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:53.420 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.420 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.420 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.420 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.420 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.420 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:53.420 "name": "raid_bdev1", 00:19:53.420 "uuid": "1fb009ae-3a90-420d-b189-cd4b40152505", 00:19:53.420 "strip_size_kb": 0, 00:19:53.420 "state": "online", 00:19:53.420 "raid_level": "raid1", 00:19:53.420 "superblock": true, 00:19:53.420 "num_base_bdevs": 2, 00:19:53.420 "num_base_bdevs_discovered": 1, 00:19:53.420 "num_base_bdevs_operational": 1, 00:19:53.420 "base_bdevs_list": [ 00:19:53.420 { 00:19:53.420 "name": null, 00:19:53.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.420 "is_configured": false, 00:19:53.420 "data_offset": 0, 00:19:53.420 "data_size": 7936 00:19:53.420 }, 00:19:53.420 { 00:19:53.420 
"name": "BaseBdev2", 00:19:53.420 "uuid": "237d700a-5d7f-5b05-bb16-a154f86ea3fc", 00:19:53.420 "is_configured": true, 00:19:53.420 "data_offset": 256, 00:19:53.420 "data_size": 7936 00:19:53.420 } 00:19:53.420 ] 00:19:53.420 }' 00:19:53.420 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:53.420 21:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.679 21:46:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:53.679 21:46:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.679 21:46:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.679 [2024-12-10 21:46:54.393616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:53.679 [2024-12-10 21:46:54.393894] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:53.679 [2024-12-10 21:46:54.393960] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:53.679 [2024-12-10 21:46:54.394051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:53.679 [2024-12-10 21:46:54.408029] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:19:53.679 21:46:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.679 21:46:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:53.679 [2024-12-10 21:46:54.409873] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:55.056 21:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:55.056 21:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:55.057 21:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:55.057 21:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:55.057 21:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:55.057 21:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.057 21:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.057 21:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.057 21:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:55.057 21:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.057 21:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:55.057 "name": "raid_bdev1", 00:19:55.057 
"uuid": "1fb009ae-3a90-420d-b189-cd4b40152505", 00:19:55.057 "strip_size_kb": 0, 00:19:55.057 "state": "online", 00:19:55.057 "raid_level": "raid1", 00:19:55.057 "superblock": true, 00:19:55.057 "num_base_bdevs": 2, 00:19:55.057 "num_base_bdevs_discovered": 2, 00:19:55.057 "num_base_bdevs_operational": 2, 00:19:55.057 "process": { 00:19:55.057 "type": "rebuild", 00:19:55.057 "target": "spare", 00:19:55.057 "progress": { 00:19:55.057 "blocks": 2560, 00:19:55.057 "percent": 32 00:19:55.057 } 00:19:55.057 }, 00:19:55.057 "base_bdevs_list": [ 00:19:55.057 { 00:19:55.057 "name": "spare", 00:19:55.057 "uuid": "9fab66c6-53e5-5cf5-8e30-3d3000490e02", 00:19:55.057 "is_configured": true, 00:19:55.057 "data_offset": 256, 00:19:55.057 "data_size": 7936 00:19:55.057 }, 00:19:55.057 { 00:19:55.057 "name": "BaseBdev2", 00:19:55.057 "uuid": "237d700a-5d7f-5b05-bb16-a154f86ea3fc", 00:19:55.057 "is_configured": true, 00:19:55.057 "data_offset": 256, 00:19:55.057 "data_size": 7936 00:19:55.057 } 00:19:55.057 ] 00:19:55.057 }' 00:19:55.057 21:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:55.057 21:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:55.057 21:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:55.057 21:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:55.057 21:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:55.057 21:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.057 21:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:55.057 [2024-12-10 21:46:55.569810] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:55.057 
[2024-12-10 21:46:55.615344] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:55.057 [2024-12-10 21:46:55.615422] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:55.057 [2024-12-10 21:46:55.615450] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:55.057 [2024-12-10 21:46:55.615487] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:55.057 21:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.057 21:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:55.057 21:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:55.057 21:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:55.057 21:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:55.057 21:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:55.057 21:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:55.057 21:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:55.057 21:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:55.057 21:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:55.057 21:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:55.057 21:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:19:55.057 21:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.057 21:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.057 21:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:55.057 21:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.057 21:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:55.057 "name": "raid_bdev1", 00:19:55.057 "uuid": "1fb009ae-3a90-420d-b189-cd4b40152505", 00:19:55.057 "strip_size_kb": 0, 00:19:55.057 "state": "online", 00:19:55.057 "raid_level": "raid1", 00:19:55.057 "superblock": true, 00:19:55.057 "num_base_bdevs": 2, 00:19:55.057 "num_base_bdevs_discovered": 1, 00:19:55.057 "num_base_bdevs_operational": 1, 00:19:55.057 "base_bdevs_list": [ 00:19:55.057 { 00:19:55.057 "name": null, 00:19:55.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.057 "is_configured": false, 00:19:55.057 "data_offset": 0, 00:19:55.057 "data_size": 7936 00:19:55.057 }, 00:19:55.057 { 00:19:55.057 "name": "BaseBdev2", 00:19:55.057 "uuid": "237d700a-5d7f-5b05-bb16-a154f86ea3fc", 00:19:55.057 "is_configured": true, 00:19:55.057 "data_offset": 256, 00:19:55.057 "data_size": 7936 00:19:55.057 } 00:19:55.057 ] 00:19:55.057 }' 00:19:55.057 21:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:55.057 21:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:55.319 21:46:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:55.319 21:46:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.319 21:46:56 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:19:55.319 [2024-12-10 21:46:56.066662] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:55.319 [2024-12-10 21:46:56.066787] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:55.319 [2024-12-10 21:46:56.066829] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:55.319 [2024-12-10 21:46:56.066858] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:55.319 [2024-12-10 21:46:56.067161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:55.319 [2024-12-10 21:46:56.067218] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:55.319 [2024-12-10 21:46:56.067306] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:55.319 [2024-12-10 21:46:56.067348] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:55.319 [2024-12-10 21:46:56.067387] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:55.319 [2024-12-10 21:46:56.067442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:55.319 [2024-12-10 21:46:56.081022] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:19:55.319 spare 00:19:55.319 21:46:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.319 21:46:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:55.319 [2024-12-10 21:46:56.082826] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:56.696 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:56.696 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:56.696 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:56.696 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:56.696 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:56.696 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.696 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.696 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.696 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.696 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.697 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:56.697 "name": 
"raid_bdev1", 00:19:56.697 "uuid": "1fb009ae-3a90-420d-b189-cd4b40152505", 00:19:56.697 "strip_size_kb": 0, 00:19:56.697 "state": "online", 00:19:56.697 "raid_level": "raid1", 00:19:56.697 "superblock": true, 00:19:56.697 "num_base_bdevs": 2, 00:19:56.697 "num_base_bdevs_discovered": 2, 00:19:56.697 "num_base_bdevs_operational": 2, 00:19:56.697 "process": { 00:19:56.697 "type": "rebuild", 00:19:56.697 "target": "spare", 00:19:56.697 "progress": { 00:19:56.697 "blocks": 2560, 00:19:56.697 "percent": 32 00:19:56.697 } 00:19:56.697 }, 00:19:56.697 "base_bdevs_list": [ 00:19:56.697 { 00:19:56.697 "name": "spare", 00:19:56.697 "uuid": "9fab66c6-53e5-5cf5-8e30-3d3000490e02", 00:19:56.697 "is_configured": true, 00:19:56.697 "data_offset": 256, 00:19:56.697 "data_size": 7936 00:19:56.697 }, 00:19:56.697 { 00:19:56.697 "name": "BaseBdev2", 00:19:56.697 "uuid": "237d700a-5d7f-5b05-bb16-a154f86ea3fc", 00:19:56.697 "is_configured": true, 00:19:56.697 "data_offset": 256, 00:19:56.697 "data_size": 7936 00:19:56.697 } 00:19:56.697 ] 00:19:56.697 }' 00:19:56.697 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:56.697 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:56.697 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:56.697 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:56.697 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:56.697 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.697 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.697 [2024-12-10 21:46:57.222859] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:19:56.697 [2024-12-10 21:46:57.287874] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:56.697 [2024-12-10 21:46:57.287995] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:56.697 [2024-12-10 21:46:57.288054] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:56.697 [2024-12-10 21:46:57.288077] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:56.697 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.697 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:56.697 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:56.697 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:56.697 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:56.697 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:56.697 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:56.697 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:56.697 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:56.697 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:56.697 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:56.697 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:19:56.697 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.697 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.697 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.697 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.697 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:56.697 "name": "raid_bdev1", 00:19:56.697 "uuid": "1fb009ae-3a90-420d-b189-cd4b40152505", 00:19:56.697 "strip_size_kb": 0, 00:19:56.697 "state": "online", 00:19:56.697 "raid_level": "raid1", 00:19:56.697 "superblock": true, 00:19:56.697 "num_base_bdevs": 2, 00:19:56.697 "num_base_bdevs_discovered": 1, 00:19:56.697 "num_base_bdevs_operational": 1, 00:19:56.697 "base_bdevs_list": [ 00:19:56.697 { 00:19:56.697 "name": null, 00:19:56.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.697 "is_configured": false, 00:19:56.697 "data_offset": 0, 00:19:56.697 "data_size": 7936 00:19:56.697 }, 00:19:56.697 { 00:19:56.697 "name": "BaseBdev2", 00:19:56.697 "uuid": "237d700a-5d7f-5b05-bb16-a154f86ea3fc", 00:19:56.697 "is_configured": true, 00:19:56.697 "data_offset": 256, 00:19:56.697 "data_size": 7936 00:19:56.697 } 00:19:56.697 ] 00:19:56.697 }' 00:19:56.697 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:56.697 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.263 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:57.263 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:57.263 21:46:57 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:57.263 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:57.263 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:57.263 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.263 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.263 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.263 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.263 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.263 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:57.263 "name": "raid_bdev1", 00:19:57.263 "uuid": "1fb009ae-3a90-420d-b189-cd4b40152505", 00:19:57.263 "strip_size_kb": 0, 00:19:57.263 "state": "online", 00:19:57.263 "raid_level": "raid1", 00:19:57.263 "superblock": true, 00:19:57.263 "num_base_bdevs": 2, 00:19:57.263 "num_base_bdevs_discovered": 1, 00:19:57.263 "num_base_bdevs_operational": 1, 00:19:57.263 "base_bdevs_list": [ 00:19:57.263 { 00:19:57.263 "name": null, 00:19:57.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.263 "is_configured": false, 00:19:57.263 "data_offset": 0, 00:19:57.263 "data_size": 7936 00:19:57.263 }, 00:19:57.263 { 00:19:57.263 "name": "BaseBdev2", 00:19:57.263 "uuid": "237d700a-5d7f-5b05-bb16-a154f86ea3fc", 00:19:57.263 "is_configured": true, 00:19:57.263 "data_offset": 256, 00:19:57.263 "data_size": 7936 00:19:57.263 } 00:19:57.263 ] 00:19:57.263 }' 00:19:57.263 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:57.263 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:57.263 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:57.263 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:57.263 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:57.263 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.263 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.263 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.263 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:57.263 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.263 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.263 [2024-12-10 21:46:57.891860] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:57.263 [2024-12-10 21:46:57.891919] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.263 [2024-12-10 21:46:57.891957] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:57.263 [2024-12-10 21:46:57.891966] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.263 [2024-12-10 21:46:57.892186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.263 [2024-12-10 21:46:57.892207] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:19:57.263 [2024-12-10 21:46:57.892260] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:57.263 [2024-12-10 21:46:57.892272] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:57.263 [2024-12-10 21:46:57.892284] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:57.263 [2024-12-10 21:46:57.892294] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:57.263 BaseBdev1 00:19:57.263 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.264 21:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:58.200 21:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:58.200 21:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:58.200 21:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:58.200 21:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:58.200 21:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:58.200 21:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:58.200 21:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:58.200 21:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:58.200 21:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:19:58.200 21:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:58.200 21:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.200 21:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.200 21:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.200 21:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:58.200 21:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.200 21:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:58.200 "name": "raid_bdev1", 00:19:58.200 "uuid": "1fb009ae-3a90-420d-b189-cd4b40152505", 00:19:58.200 "strip_size_kb": 0, 00:19:58.200 "state": "online", 00:19:58.200 "raid_level": "raid1", 00:19:58.200 "superblock": true, 00:19:58.200 "num_base_bdevs": 2, 00:19:58.200 "num_base_bdevs_discovered": 1, 00:19:58.200 "num_base_bdevs_operational": 1, 00:19:58.200 "base_bdevs_list": [ 00:19:58.200 { 00:19:58.200 "name": null, 00:19:58.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.200 "is_configured": false, 00:19:58.200 "data_offset": 0, 00:19:58.200 "data_size": 7936 00:19:58.200 }, 00:19:58.200 { 00:19:58.200 "name": "BaseBdev2", 00:19:58.200 "uuid": "237d700a-5d7f-5b05-bb16-a154f86ea3fc", 00:19:58.200 "is_configured": true, 00:19:58.200 "data_offset": 256, 00:19:58.200 "data_size": 7936 00:19:58.200 } 00:19:58.200 ] 00:19:58.200 }' 00:19:58.200 21:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:58.200 21:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:58.767 21:46:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:19:58.767 21:46:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:58.767 21:46:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:58.767 21:46:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:58.767 21:46:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:58.767 21:46:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.767 21:46:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.767 21:46:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:58.767 21:46:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.767 21:46:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.767 21:46:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:58.767 "name": "raid_bdev1", 00:19:58.767 "uuid": "1fb009ae-3a90-420d-b189-cd4b40152505", 00:19:58.767 "strip_size_kb": 0, 00:19:58.767 "state": "online", 00:19:58.767 "raid_level": "raid1", 00:19:58.767 "superblock": true, 00:19:58.767 "num_base_bdevs": 2, 00:19:58.767 "num_base_bdevs_discovered": 1, 00:19:58.767 "num_base_bdevs_operational": 1, 00:19:58.767 "base_bdevs_list": [ 00:19:58.767 { 00:19:58.767 "name": null, 00:19:58.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.767 "is_configured": false, 00:19:58.767 "data_offset": 0, 00:19:58.767 "data_size": 7936 00:19:58.767 }, 00:19:58.767 { 00:19:58.767 "name": "BaseBdev2", 00:19:58.767 "uuid": "237d700a-5d7f-5b05-bb16-a154f86ea3fc", 00:19:58.767 "is_configured": 
true, 00:19:58.767 "data_offset": 256, 00:19:58.767 "data_size": 7936 00:19:58.767 } 00:19:58.767 ] 00:19:58.767 }' 00:19:58.767 21:46:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:58.767 21:46:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:58.767 21:46:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:58.767 21:46:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:58.767 21:46:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:58.767 21:46:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:19:58.767 21:46:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:58.767 21:46:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:58.767 21:46:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:58.767 21:46:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:58.767 21:46:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:58.767 21:46:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:58.767 21:46:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.767 21:46:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:58.767 [2024-12-10 21:46:59.481234] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:58.767 [2024-12-10 21:46:59.481412] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:58.767 [2024-12-10 21:46:59.481426] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:58.767 request: 00:19:58.767 { 00:19:58.767 "base_bdev": "BaseBdev1", 00:19:58.767 "raid_bdev": "raid_bdev1", 00:19:58.767 "method": "bdev_raid_add_base_bdev", 00:19:58.767 "req_id": 1 00:19:58.767 } 00:19:58.767 Got JSON-RPC error response 00:19:58.767 response: 00:19:58.767 { 00:19:58.767 "code": -22, 00:19:58.767 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:58.767 } 00:19:58.767 21:46:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:58.767 21:46:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:19:58.767 21:46:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:58.767 21:46:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:58.767 21:46:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:58.767 21:46:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:00.144 21:47:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:00.144 21:47:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:00.144 21:47:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:00.144 21:47:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:20:00.144 21:47:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:00.144 21:47:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:00.144 21:47:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:00.144 21:47:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:00.144 21:47:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:00.144 21:47:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:00.144 21:47:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.144 21:47:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.144 21:47:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.144 21:47:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:00.144 21:47:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.144 21:47:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:00.144 "name": "raid_bdev1", 00:20:00.144 "uuid": "1fb009ae-3a90-420d-b189-cd4b40152505", 00:20:00.144 "strip_size_kb": 0, 00:20:00.144 "state": "online", 00:20:00.144 "raid_level": "raid1", 00:20:00.144 "superblock": true, 00:20:00.144 "num_base_bdevs": 2, 00:20:00.144 "num_base_bdevs_discovered": 1, 00:20:00.144 "num_base_bdevs_operational": 1, 00:20:00.144 "base_bdevs_list": [ 00:20:00.144 { 00:20:00.144 "name": null, 00:20:00.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.144 "is_configured": false, 00:20:00.144 
"data_offset": 0, 00:20:00.144 "data_size": 7936 00:20:00.144 }, 00:20:00.144 { 00:20:00.144 "name": "BaseBdev2", 00:20:00.144 "uuid": "237d700a-5d7f-5b05-bb16-a154f86ea3fc", 00:20:00.144 "is_configured": true, 00:20:00.144 "data_offset": 256, 00:20:00.144 "data_size": 7936 00:20:00.144 } 00:20:00.144 ] 00:20:00.144 }' 00:20:00.144 21:47:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:00.144 21:47:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:00.403 21:47:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:00.403 21:47:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:00.403 21:47:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:00.403 21:47:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:00.403 21:47:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:00.403 21:47:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.403 21:47:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.403 21:47:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.403 21:47:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:00.403 21:47:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.403 21:47:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:00.403 "name": "raid_bdev1", 00:20:00.403 "uuid": "1fb009ae-3a90-420d-b189-cd4b40152505", 00:20:00.403 
"strip_size_kb": 0, 00:20:00.403 "state": "online", 00:20:00.403 "raid_level": "raid1", 00:20:00.403 "superblock": true, 00:20:00.403 "num_base_bdevs": 2, 00:20:00.403 "num_base_bdevs_discovered": 1, 00:20:00.403 "num_base_bdevs_operational": 1, 00:20:00.403 "base_bdevs_list": [ 00:20:00.403 { 00:20:00.403 "name": null, 00:20:00.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.403 "is_configured": false, 00:20:00.403 "data_offset": 0, 00:20:00.403 "data_size": 7936 00:20:00.403 }, 00:20:00.403 { 00:20:00.403 "name": "BaseBdev2", 00:20:00.403 "uuid": "237d700a-5d7f-5b05-bb16-a154f86ea3fc", 00:20:00.403 "is_configured": true, 00:20:00.403 "data_offset": 256, 00:20:00.403 "data_size": 7936 00:20:00.403 } 00:20:00.403 ] 00:20:00.403 }' 00:20:00.403 21:47:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:00.403 21:47:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:00.403 21:47:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:00.403 21:47:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:00.403 21:47:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87929 00:20:00.403 21:47:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87929 ']' 00:20:00.403 21:47:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87929 00:20:00.403 21:47:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:20:00.403 21:47:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:00.403 21:47:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87929 00:20:00.403 21:47:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:00.403 killing process with pid 87929 00:20:00.403 Received shutdown signal, test time was about 60.000000 seconds 00:20:00.403 00:20:00.403 Latency(us) 00:20:00.403 [2024-12-10T21:47:01.186Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.403 [2024-12-10T21:47:01.186Z] =================================================================================================================== 00:20:00.403 [2024-12-10T21:47:01.186Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:00.403 21:47:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:00.403 21:47:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87929' 00:20:00.403 21:47:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87929 00:20:00.403 [2024-12-10 21:47:01.094132] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:00.403 [2024-12-10 21:47:01.094261] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:00.403 21:47:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87929 00:20:00.404 [2024-12-10 21:47:01.094310] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:00.404 [2024-12-10 21:47:01.094320] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:00.663 [2024-12-10 21:47:01.412243] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:02.042 21:47:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:20:02.042 00:20:02.042 real 0m19.561s 00:20:02.042 user 0m25.480s 00:20:02.042 sys 0m2.517s 00:20:02.042 
************************************ 00:20:02.042 END TEST raid_rebuild_test_sb_md_separate 00:20:02.042 ************************************ 00:20:02.042 21:47:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:02.042 21:47:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:02.042 21:47:02 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:20:02.042 21:47:02 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:20:02.042 21:47:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:02.042 21:47:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:02.042 21:47:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:02.042 ************************************ 00:20:02.042 START TEST raid_state_function_test_sb_md_interleaved 00:20:02.042 ************************************ 00:20:02.042 21:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:20:02.042 21:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:20:02.042 21:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:20:02.042 21:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:02.042 21:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:02.042 21:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:02.042 21:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:02.042 21:47:02 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:02.042 21:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:02.042 21:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:02.042 21:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:02.042 21:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:02.042 21:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:02.042 21:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:02.042 21:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:02.042 21:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:02.042 21:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:02.042 21:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:02.042 21:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:02.042 21:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:20:02.042 21:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:20:02.042 21:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:02.042 21:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:20:02.042 21:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88615 00:20:02.042 21:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:02.042 21:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88615' 00:20:02.042 Process raid pid: 88615 00:20:02.042 21:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88615 00:20:02.042 21:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88615 ']' 00:20:02.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:02.042 21:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.042 21:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:02.042 21:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.042 21:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:02.042 21:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.042 [2024-12-10 21:47:02.676308] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:20:02.042 [2024-12-10 21:47:02.676444] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:02.301 [2024-12-10 21:47:02.835407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.301 [2024-12-10 21:47:02.947237] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.560 [2024-12-10 21:47:03.154260] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:02.560 [2024-12-10 21:47:03.154292] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:02.819 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:02.819 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:20:02.819 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:02.819 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.819 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.819 [2024-12-10 21:47:03.500421] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:02.819 [2024-12-10 21:47:03.500486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:02.819 [2024-12-10 21:47:03.500497] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:02.819 [2024-12-10 21:47:03.500506] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:02.819 21:47:03 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.819 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:02.819 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:02.819 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:02.819 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:02.819 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:02.819 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:02.819 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:02.819 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:02.820 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:02.820 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:02.820 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.820 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.820 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.820 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:02.820 21:47:03 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.820 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:02.820 "name": "Existed_Raid", 00:20:02.820 "uuid": "4080e420-7717-4abf-8aee-23401bf7f01b", 00:20:02.820 "strip_size_kb": 0, 00:20:02.820 "state": "configuring", 00:20:02.820 "raid_level": "raid1", 00:20:02.820 "superblock": true, 00:20:02.820 "num_base_bdevs": 2, 00:20:02.820 "num_base_bdevs_discovered": 0, 00:20:02.820 "num_base_bdevs_operational": 2, 00:20:02.820 "base_bdevs_list": [ 00:20:02.820 { 00:20:02.820 "name": "BaseBdev1", 00:20:02.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.820 "is_configured": false, 00:20:02.820 "data_offset": 0, 00:20:02.820 "data_size": 0 00:20:02.820 }, 00:20:02.820 { 00:20:02.820 "name": "BaseBdev2", 00:20:02.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.820 "is_configured": false, 00:20:02.820 "data_offset": 0, 00:20:02.820 "data_size": 0 00:20:02.820 } 00:20:02.820 ] 00:20:02.820 }' 00:20:02.820 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:02.820 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.390 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:03.390 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.390 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.390 [2024-12-10 21:47:03.931623] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:03.390 [2024-12-10 21:47:03.931658] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:20:03.390 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.390 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:03.390 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.390 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.390 [2024-12-10 21:47:03.943589] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:03.390 [2024-12-10 21:47:03.943630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:03.390 [2024-12-10 21:47:03.943639] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:03.390 [2024-12-10 21:47:03.943649] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:03.390 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.390 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:20:03.390 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.390 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.390 [2024-12-10 21:47:03.989280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:03.390 BaseBdev1 00:20:03.390 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.390 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:03.390 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:03.390 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:03.390 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:20:03.390 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:03.390 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:03.390 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:03.390 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.390 21:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.390 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.390 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:03.390 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.390 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.390 [ 00:20:03.390 { 00:20:03.390 "name": "BaseBdev1", 00:20:03.390 "aliases": [ 00:20:03.390 "878a6f1b-b8d5-478f-81b7-4dbdf1fcbe77" 00:20:03.390 ], 00:20:03.390 "product_name": "Malloc disk", 00:20:03.390 "block_size": 4128, 00:20:03.390 "num_blocks": 8192, 00:20:03.390 "uuid": "878a6f1b-b8d5-478f-81b7-4dbdf1fcbe77", 00:20:03.390 "md_size": 32, 00:20:03.390 
"md_interleave": true, 00:20:03.390 "dif_type": 0, 00:20:03.390 "assigned_rate_limits": { 00:20:03.390 "rw_ios_per_sec": 0, 00:20:03.390 "rw_mbytes_per_sec": 0, 00:20:03.390 "r_mbytes_per_sec": 0, 00:20:03.390 "w_mbytes_per_sec": 0 00:20:03.390 }, 00:20:03.390 "claimed": true, 00:20:03.390 "claim_type": "exclusive_write", 00:20:03.390 "zoned": false, 00:20:03.390 "supported_io_types": { 00:20:03.390 "read": true, 00:20:03.390 "write": true, 00:20:03.390 "unmap": true, 00:20:03.390 "flush": true, 00:20:03.390 "reset": true, 00:20:03.390 "nvme_admin": false, 00:20:03.390 "nvme_io": false, 00:20:03.390 "nvme_io_md": false, 00:20:03.390 "write_zeroes": true, 00:20:03.390 "zcopy": true, 00:20:03.390 "get_zone_info": false, 00:20:03.390 "zone_management": false, 00:20:03.390 "zone_append": false, 00:20:03.390 "compare": false, 00:20:03.390 "compare_and_write": false, 00:20:03.390 "abort": true, 00:20:03.390 "seek_hole": false, 00:20:03.390 "seek_data": false, 00:20:03.390 "copy": true, 00:20:03.390 "nvme_iov_md": false 00:20:03.390 }, 00:20:03.390 "memory_domains": [ 00:20:03.390 { 00:20:03.390 "dma_device_id": "system", 00:20:03.390 "dma_device_type": 1 00:20:03.390 }, 00:20:03.390 { 00:20:03.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.390 "dma_device_type": 2 00:20:03.390 } 00:20:03.390 ], 00:20:03.390 "driver_specific": {} 00:20:03.390 } 00:20:03.390 ] 00:20:03.390 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.390 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:20:03.390 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:03.390 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:03.390 21:47:04 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:03.390 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:03.391 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:03.391 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:03.391 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:03.391 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:03.391 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:03.391 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:03.391 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.391 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:03.391 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.391 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.391 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.391 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:03.391 "name": "Existed_Raid", 00:20:03.391 "uuid": "831b0dca-29c2-44e1-ba46-abb4edbbf22d", 00:20:03.391 "strip_size_kb": 0, 00:20:03.391 "state": "configuring", 00:20:03.391 "raid_level": "raid1", 
00:20:03.391 "superblock": true, 00:20:03.391 "num_base_bdevs": 2, 00:20:03.391 "num_base_bdevs_discovered": 1, 00:20:03.391 "num_base_bdevs_operational": 2, 00:20:03.391 "base_bdevs_list": [ 00:20:03.391 { 00:20:03.391 "name": "BaseBdev1", 00:20:03.391 "uuid": "878a6f1b-b8d5-478f-81b7-4dbdf1fcbe77", 00:20:03.391 "is_configured": true, 00:20:03.391 "data_offset": 256, 00:20:03.391 "data_size": 7936 00:20:03.391 }, 00:20:03.391 { 00:20:03.391 "name": "BaseBdev2", 00:20:03.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.391 "is_configured": false, 00:20:03.391 "data_offset": 0, 00:20:03.391 "data_size": 0 00:20:03.391 } 00:20:03.391 ] 00:20:03.391 }' 00:20:03.391 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:03.391 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.990 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:03.990 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.990 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.990 [2024-12-10 21:47:04.480542] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:03.990 [2024-12-10 21:47:04.480643] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:03.990 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.990 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:03.990 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:20:03.990 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.990 [2024-12-10 21:47:04.492562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:03.990 [2024-12-10 21:47:04.494396] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:03.990 [2024-12-10 21:47:04.494496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:03.990 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.990 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:03.990 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:03.990 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:03.990 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:03.990 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:03.990 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:03.990 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:03.990 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:03.990 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:03.990 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:03.990 
21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:03.990 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:03.990 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:03.990 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.990 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.990 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.990 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.990 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:03.990 "name": "Existed_Raid", 00:20:03.990 "uuid": "0b29c799-fa42-4d83-90b9-537172fabc23", 00:20:03.990 "strip_size_kb": 0, 00:20:03.990 "state": "configuring", 00:20:03.990 "raid_level": "raid1", 00:20:03.990 "superblock": true, 00:20:03.990 "num_base_bdevs": 2, 00:20:03.990 "num_base_bdevs_discovered": 1, 00:20:03.990 "num_base_bdevs_operational": 2, 00:20:03.990 "base_bdevs_list": [ 00:20:03.990 { 00:20:03.990 "name": "BaseBdev1", 00:20:03.990 "uuid": "878a6f1b-b8d5-478f-81b7-4dbdf1fcbe77", 00:20:03.990 "is_configured": true, 00:20:03.990 "data_offset": 256, 00:20:03.990 "data_size": 7936 00:20:03.990 }, 00:20:03.991 { 00:20:03.991 "name": "BaseBdev2", 00:20:03.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.991 "is_configured": false, 00:20:03.991 "data_offset": 0, 00:20:03.991 "data_size": 0 00:20:03.991 } 00:20:03.991 ] 00:20:03.991 }' 00:20:03.991 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:20:03.991 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.255 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:20:04.255 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.255 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.255 [2024-12-10 21:47:04.948845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:04.255 [2024-12-10 21:47:04.949152] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:04.255 [2024-12-10 21:47:04.949189] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:04.255 [2024-12-10 21:47:04.949314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:04.255 [2024-12-10 21:47:04.949437] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:04.255 [2024-12-10 21:47:04.949477] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:04.255 [2024-12-10 21:47:04.949570] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:04.255 BaseBdev2 00:20:04.255 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.255 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:04.255 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:04.255 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:20:04.255 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:20:04.255 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:04.255 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:04.255 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:04.255 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.255 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.255 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.255 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:04.255 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.255 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.255 [ 00:20:04.255 { 00:20:04.255 "name": "BaseBdev2", 00:20:04.255 "aliases": [ 00:20:04.255 "aaea01d0-791a-427f-97cf-c98188518c90" 00:20:04.255 ], 00:20:04.255 "product_name": "Malloc disk", 00:20:04.255 "block_size": 4128, 00:20:04.255 "num_blocks": 8192, 00:20:04.255 "uuid": "aaea01d0-791a-427f-97cf-c98188518c90", 00:20:04.255 "md_size": 32, 00:20:04.255 "md_interleave": true, 00:20:04.255 "dif_type": 0, 00:20:04.255 "assigned_rate_limits": { 00:20:04.255 "rw_ios_per_sec": 0, 00:20:04.255 "rw_mbytes_per_sec": 0, 00:20:04.255 "r_mbytes_per_sec": 0, 00:20:04.255 "w_mbytes_per_sec": 0 00:20:04.255 }, 00:20:04.255 "claimed": true, 00:20:04.255 "claim_type": "exclusive_write", 
00:20:04.255 "zoned": false, 00:20:04.255 "supported_io_types": { 00:20:04.255 "read": true, 00:20:04.255 "write": true, 00:20:04.255 "unmap": true, 00:20:04.255 "flush": true, 00:20:04.255 "reset": true, 00:20:04.255 "nvme_admin": false, 00:20:04.255 "nvme_io": false, 00:20:04.255 "nvme_io_md": false, 00:20:04.255 "write_zeroes": true, 00:20:04.255 "zcopy": true, 00:20:04.255 "get_zone_info": false, 00:20:04.255 "zone_management": false, 00:20:04.255 "zone_append": false, 00:20:04.255 "compare": false, 00:20:04.255 "compare_and_write": false, 00:20:04.255 "abort": true, 00:20:04.255 "seek_hole": false, 00:20:04.255 "seek_data": false, 00:20:04.255 "copy": true, 00:20:04.255 "nvme_iov_md": false 00:20:04.255 }, 00:20:04.255 "memory_domains": [ 00:20:04.255 { 00:20:04.255 "dma_device_id": "system", 00:20:04.255 "dma_device_type": 1 00:20:04.255 }, 00:20:04.255 { 00:20:04.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:04.255 "dma_device_type": 2 00:20:04.255 } 00:20:04.255 ], 00:20:04.255 "driver_specific": {} 00:20:04.255 } 00:20:04.255 ] 00:20:04.255 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.255 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:20:04.255 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:04.255 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:04.255 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:04.255 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:04.255 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:04.255 
21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:04.255 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:04.255 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:04.255 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:04.255 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:04.255 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:04.255 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:04.256 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.256 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:04.256 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.256 21:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.256 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.513 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:04.513 "name": "Existed_Raid", 00:20:04.513 "uuid": "0b29c799-fa42-4d83-90b9-537172fabc23", 00:20:04.513 "strip_size_kb": 0, 00:20:04.513 "state": "online", 00:20:04.513 "raid_level": "raid1", 00:20:04.513 "superblock": true, 00:20:04.513 "num_base_bdevs": 2, 00:20:04.513 "num_base_bdevs_discovered": 2, 00:20:04.513 
"num_base_bdevs_operational": 2, 00:20:04.513 "base_bdevs_list": [ 00:20:04.513 { 00:20:04.513 "name": "BaseBdev1", 00:20:04.513 "uuid": "878a6f1b-b8d5-478f-81b7-4dbdf1fcbe77", 00:20:04.513 "is_configured": true, 00:20:04.513 "data_offset": 256, 00:20:04.513 "data_size": 7936 00:20:04.513 }, 00:20:04.513 { 00:20:04.513 "name": "BaseBdev2", 00:20:04.513 "uuid": "aaea01d0-791a-427f-97cf-c98188518c90", 00:20:04.513 "is_configured": true, 00:20:04.513 "data_offset": 256, 00:20:04.513 "data_size": 7936 00:20:04.513 } 00:20:04.513 ] 00:20:04.513 }' 00:20:04.513 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:04.513 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.772 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:04.772 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:04.772 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:04.772 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:04.772 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:20:04.772 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:04.772 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:04.772 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:04.772 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.772 21:47:05 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.772 [2024-12-10 21:47:05.432656] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:04.772 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.772 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:04.772 "name": "Existed_Raid", 00:20:04.772 "aliases": [ 00:20:04.772 "0b29c799-fa42-4d83-90b9-537172fabc23" 00:20:04.772 ], 00:20:04.772 "product_name": "Raid Volume", 00:20:04.772 "block_size": 4128, 00:20:04.772 "num_blocks": 7936, 00:20:04.772 "uuid": "0b29c799-fa42-4d83-90b9-537172fabc23", 00:20:04.772 "md_size": 32, 00:20:04.772 "md_interleave": true, 00:20:04.772 "dif_type": 0, 00:20:04.772 "assigned_rate_limits": { 00:20:04.772 "rw_ios_per_sec": 0, 00:20:04.772 "rw_mbytes_per_sec": 0, 00:20:04.772 "r_mbytes_per_sec": 0, 00:20:04.772 "w_mbytes_per_sec": 0 00:20:04.772 }, 00:20:04.772 "claimed": false, 00:20:04.772 "zoned": false, 00:20:04.772 "supported_io_types": { 00:20:04.772 "read": true, 00:20:04.772 "write": true, 00:20:04.772 "unmap": false, 00:20:04.772 "flush": false, 00:20:04.772 "reset": true, 00:20:04.772 "nvme_admin": false, 00:20:04.772 "nvme_io": false, 00:20:04.772 "nvme_io_md": false, 00:20:04.772 "write_zeroes": true, 00:20:04.772 "zcopy": false, 00:20:04.772 "get_zone_info": false, 00:20:04.772 "zone_management": false, 00:20:04.772 "zone_append": false, 00:20:04.772 "compare": false, 00:20:04.772 "compare_and_write": false, 00:20:04.772 "abort": false, 00:20:04.772 "seek_hole": false, 00:20:04.772 "seek_data": false, 00:20:04.772 "copy": false, 00:20:04.772 "nvme_iov_md": false 00:20:04.772 }, 00:20:04.772 "memory_domains": [ 00:20:04.772 { 00:20:04.772 "dma_device_id": "system", 00:20:04.772 "dma_device_type": 1 00:20:04.772 }, 00:20:04.772 { 00:20:04.772 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:20:04.772 "dma_device_type": 2 00:20:04.772 }, 00:20:04.772 { 00:20:04.772 "dma_device_id": "system", 00:20:04.772 "dma_device_type": 1 00:20:04.772 }, 00:20:04.772 { 00:20:04.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:04.772 "dma_device_type": 2 00:20:04.772 } 00:20:04.772 ], 00:20:04.772 "driver_specific": { 00:20:04.772 "raid": { 00:20:04.772 "uuid": "0b29c799-fa42-4d83-90b9-537172fabc23", 00:20:04.772 "strip_size_kb": 0, 00:20:04.772 "state": "online", 00:20:04.772 "raid_level": "raid1", 00:20:04.772 "superblock": true, 00:20:04.772 "num_base_bdevs": 2, 00:20:04.772 "num_base_bdevs_discovered": 2, 00:20:04.772 "num_base_bdevs_operational": 2, 00:20:04.772 "base_bdevs_list": [ 00:20:04.772 { 00:20:04.772 "name": "BaseBdev1", 00:20:04.772 "uuid": "878a6f1b-b8d5-478f-81b7-4dbdf1fcbe77", 00:20:04.772 "is_configured": true, 00:20:04.772 "data_offset": 256, 00:20:04.772 "data_size": 7936 00:20:04.772 }, 00:20:04.772 { 00:20:04.772 "name": "BaseBdev2", 00:20:04.772 "uuid": "aaea01d0-791a-427f-97cf-c98188518c90", 00:20:04.772 "is_configured": true, 00:20:04.772 "data_offset": 256, 00:20:04.772 "data_size": 7936 00:20:04.772 } 00:20:04.772 ] 00:20:04.772 } 00:20:04.772 } 00:20:04.772 }' 00:20:04.772 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:04.772 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:04.772 BaseBdev2' 00:20:04.772 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:05.031 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:20:05.031 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:20:05.031 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:05.031 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:05.031 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.031 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.031 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.031 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:05.031 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:05.031 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:05.031 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:05.031 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:05.031 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.031 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.031 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.031 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:05.031 
21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:05.031 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:05.031 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.031 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.031 [2024-12-10 21:47:05.671985] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:05.031 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.031 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:05.031 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:20:05.031 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:05.031 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:20:05.031 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:05.031 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:20:05.031 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:05.031 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:05.031 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:05.031 21:47:05 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:05.031 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:05.031 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:05.031 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:05.031 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:05.031 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:05.031 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.031 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.031 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:05.031 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.031 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.289 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:05.289 "name": "Existed_Raid", 00:20:05.289 "uuid": "0b29c799-fa42-4d83-90b9-537172fabc23", 00:20:05.289 "strip_size_kb": 0, 00:20:05.289 "state": "online", 00:20:05.289 "raid_level": "raid1", 00:20:05.289 "superblock": true, 00:20:05.289 "num_base_bdevs": 2, 00:20:05.289 "num_base_bdevs_discovered": 1, 00:20:05.289 "num_base_bdevs_operational": 1, 00:20:05.289 "base_bdevs_list": [ 00:20:05.289 { 00:20:05.289 "name": null, 00:20:05.289 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:05.289 "is_configured": false, 00:20:05.289 "data_offset": 0, 00:20:05.289 "data_size": 7936 00:20:05.289 }, 00:20:05.289 { 00:20:05.289 "name": "BaseBdev2", 00:20:05.289 "uuid": "aaea01d0-791a-427f-97cf-c98188518c90", 00:20:05.289 "is_configured": true, 00:20:05.289 "data_offset": 256, 00:20:05.289 "data_size": 7936 00:20:05.289 } 00:20:05.289 ] 00:20:05.289 }' 00:20:05.289 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:05.289 21:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.548 21:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:05.548 21:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:05.548 21:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.548 21:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:05.548 21:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.548 21:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.548 21:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.548 21:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:05.548 21:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:05.548 21:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:05.548 21:47:06 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.548 21:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.548 [2024-12-10 21:47:06.238416] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:05.548 [2024-12-10 21:47:06.238578] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:05.807 [2024-12-10 21:47:06.333548] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:05.807 [2024-12-10 21:47:06.333679] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:05.807 [2024-12-10 21:47:06.333721] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:05.807 21:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.807 21:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:05.807 21:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:05.807 21:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:05.807 21:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.807 21:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.807 21:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.807 21:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.807 21:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev=
00:20:05.807 21:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:20:05.807 21:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:20:05.807 21:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88615
00:20:05.807 21:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88615 ']'
00:20:05.807 21:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88615
00:20:05.807 21:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname
00:20:05.807 21:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:05.807 21:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88615
00:20:05.807 21:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:05.807 21:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:05.807 21:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88615'
killing process with pid 88615
21:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88615
00:20:05.807 [2024-12-10 21:47:06.414639] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:20:05.807 21:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88615
00:20:05.807 [2024-12-10 21:47:06.433448] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:20:07.187 ************************************
00:20:07.187 END TEST raid_state_function_test_sb_md_interleaved
00:20:07.187 ************************************
00:20:07.187 21:47:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0
00:20:07.187
00:20:07.187 real 0m4.964s
00:20:07.187 user 0m7.161s
00:20:07.187 sys 0m0.815s
00:20:07.187 21:47:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:07.187 21:47:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:07.187 21:47:07 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2
00:20:07.187 21:47:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:20:07.187 21:47:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:07.187 21:47:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:20:07.187 ************************************
00:20:07.187 START TEST raid_superblock_test_md_interleaved
00:20:07.187 ************************************
00:20:07.187 21:47:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2
00:20:07.187 21:47:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1
00:20:07.187 21:47:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2
00:20:07.187 21:47:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:20:07.187 21:47:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:20:07.187 21:47:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:20:07.187 21:47:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:20:07.187 21:47:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:20:07.187 21:47:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:20:07.187 21:47:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:20:07.187 21:47:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size
00:20:07.187 21:47:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:20:07.187 21:47:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:20:07.187 21:47:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:20:07.187 21:47:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']'
00:20:07.187 21:47:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:07.187 21:47:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88862
00:20:07.187 21:47:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:20:07.187 21:47:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88862
00:20:07.187 21:47:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88862 ']'
00:20:07.187 21:47:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:07.187 21:47:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:07.187 21:47:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:07.187 21:47:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:07.187 21:47:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:07.187 [2024-12-10 21:47:07.691787] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization...
00:20:07.187 [2024-12-10 21:47:07.691903] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88862 ]
00:20:07.187 [2024-12-10 21:47:07.863699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:07.447 [2024-12-10 21:47:07.976576] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:20:07.447 [2024-12-10 21:47:08.167469] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:20:07.447 [2024-12-10 21:47:08.167526] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:08.016 malloc1
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:08.016 [2024-12-10 21:47:08.561876] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:20:08.016 [2024-12-10 21:47:08.561984] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:08.016 [2024-12-10 21:47:08.562024] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:20:08.016 [2024-12-10 21:47:08.562052] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:08.016 [2024-12-10 21:47:08.563824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:08.016 [2024-12-10 21:47:08.563893] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:20:08.016 pt1
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:08.016 malloc2
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:08.016 [2024-12-10 21:47:08.621087] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:20:08.016 [2024-12-10 21:47:08.621191] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:08.016 [2024-12-10 21:47:08.621229] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:20:08.016 [2024-12-10 21:47:08.621261] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:08.016 [2024-12-10 21:47:08.623031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:08.016 [2024-12-10 21:47:08.623104] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:20:08.016 pt2
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:08.016 [2024-12-10 21:47:08.633090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:20:08.016 [2024-12-10 21:47:08.634861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:20:08.016 [2024-12-10 21:47:08.635041] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:20:08.016 [2024-12-10 21:47:08.635054] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:20:08.016 [2024-12-10 21:47:08.635126] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:20:08.016 [2024-12-10 21:47:08.635194] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:20:08.016 [2024-12-10 21:47:08.635206] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:20:08.016 [2024-12-10 21:47:08.635268] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:08.016 "name": "raid_bdev1",
00:20:08.016 "uuid": "f338b92c-0165-4136-922c-bbee13e56625",
00:20:08.016 "strip_size_kb": 0,
00:20:08.016 "state": "online",
00:20:08.016 "raid_level": "raid1",
00:20:08.016 "superblock": true,
00:20:08.016 "num_base_bdevs": 2,
00:20:08.016 "num_base_bdevs_discovered": 2,
00:20:08.016 "num_base_bdevs_operational": 2,
00:20:08.016 "base_bdevs_list": [
00:20:08.016 {
00:20:08.016 "name": "pt1",
00:20:08.016 "uuid": "00000000-0000-0000-0000-000000000001",
00:20:08.016 "is_configured": true,
00:20:08.016 "data_offset": 256,
00:20:08.016 "data_size": 7936
00:20:08.016 },
00:20:08.016 {
00:20:08.016 "name": "pt2",
00:20:08.016 "uuid": "00000000-0000-0000-0000-000000000002",
00:20:08.016 "is_configured": true,
00:20:08.016 "data_offset": 256,
00:20:08.016 "data_size": 7936
00:20:08.016 }
00:20:08.016 ]
00:20:08.016 }'
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:08.016 21:47:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:08.586 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:20:08.586 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:20:08.586 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:20:08.586 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:20:08.586 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name
00:20:08.586 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:20:08.586 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:20:08.586 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:20:08.586 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.586 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:08.586 [2024-12-10 21:47:09.116631] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:20:08.586 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.586 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:20:08.586 "name": "raid_bdev1",
00:20:08.586 "aliases": [
00:20:08.586 "f338b92c-0165-4136-922c-bbee13e56625"
00:20:08.586 ],
00:20:08.586 "product_name": "Raid Volume",
00:20:08.586 "block_size": 4128,
00:20:08.586 "num_blocks": 7936,
00:20:08.586 "uuid": "f338b92c-0165-4136-922c-bbee13e56625",
00:20:08.586 "md_size": 32,
00:20:08.586 "md_interleave": true,
00:20:08.586 "dif_type": 0,
00:20:08.586 "assigned_rate_limits": {
00:20:08.586 "rw_ios_per_sec": 0,
00:20:08.586 "rw_mbytes_per_sec": 0,
00:20:08.586 "r_mbytes_per_sec": 0,
00:20:08.586 "w_mbytes_per_sec": 0
00:20:08.586 },
00:20:08.586 "claimed": false,
00:20:08.586 "zoned": false,
00:20:08.586 "supported_io_types": {
00:20:08.586 "read": true,
00:20:08.586 "write": true,
00:20:08.586 "unmap": false,
00:20:08.586 "flush": false,
00:20:08.586 "reset": true,
00:20:08.586 "nvme_admin": false,
00:20:08.586 "nvme_io": false,
00:20:08.586 "nvme_io_md": false,
00:20:08.586 "write_zeroes": true,
00:20:08.586 "zcopy": false,
00:20:08.586 "get_zone_info": false,
00:20:08.586 "zone_management": false,
00:20:08.586 "zone_append": false,
00:20:08.586 "compare": false,
00:20:08.586 "compare_and_write": false,
00:20:08.586 "abort": false,
00:20:08.586 "seek_hole": false,
00:20:08.586 "seek_data": false,
00:20:08.586 "copy": false,
00:20:08.586 "nvme_iov_md": false
00:20:08.586 },
00:20:08.586 "memory_domains": [
00:20:08.586 {
00:20:08.586 "dma_device_id": "system",
00:20:08.586 "dma_device_type": 1
00:20:08.586 },
00:20:08.586 {
00:20:08.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:08.586 "dma_device_type": 2
00:20:08.586 },
00:20:08.586 {
00:20:08.586 "dma_device_id": "system",
00:20:08.586 "dma_device_type": 1
00:20:08.586 },
00:20:08.586 {
00:20:08.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:08.586 "dma_device_type": 2
00:20:08.586 }
00:20:08.586 ],
00:20:08.586 "driver_specific": {
00:20:08.586 "raid": {
00:20:08.586 "uuid": "f338b92c-0165-4136-922c-bbee13e56625",
00:20:08.586 "strip_size_kb": 0,
00:20:08.586 "state": "online",
00:20:08.586 "raid_level": "raid1",
00:20:08.586 "superblock": true,
00:20:08.586 "num_base_bdevs": 2,
00:20:08.586 "num_base_bdevs_discovered": 2,
00:20:08.586 "num_base_bdevs_operational": 2,
00:20:08.586 "base_bdevs_list": [
00:20:08.586 {
00:20:08.586 "name": "pt1",
00:20:08.586 "uuid": "00000000-0000-0000-0000-000000000001",
00:20:08.586 "is_configured": true,
00:20:08.586 "data_offset": 256,
00:20:08.586 "data_size": 7936
00:20:08.586 },
00:20:08.586 {
00:20:08.586 "name": "pt2",
00:20:08.586 "uuid": "00000000-0000-0000-0000-000000000002",
00:20:08.586 "is_configured": true,
00:20:08.586 "data_offset": 256,
00:20:08.587 "data_size": 7936
00:20:08.587 }
00:20:08.587 ]
00:20:08.587 }
00:20:08.587 }
00:20:08.587 }'
00:20:08.587 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:20:08.587 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:20:08.587 pt2'
00:20:08.587 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:20:08.587 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0'
00:20:08.587 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:20:08.587 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:20:08.587 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.587 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:20:08.587 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:08.587 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.587 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:20:08.587 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:20:08.587 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:20:08.587 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:20:08.587 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:20:08.587 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.587 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:08.587 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.587 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:20:08.587 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:20:08.587 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:20:08.587 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.587 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:08.587 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:20:08.587 [2024-12-10 21:47:09.320141] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:20:08.587 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.587 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f338b92c-0165-4136-922c-bbee13e56625
00:20:08.587 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z f338b92c-0165-4136-922c-bbee13e56625 ']'
00:20:08.587 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:20:08.587 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.587 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:08.845 [2024-12-10 21:47:09.367799] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:20:08.845 [2024-12-10 21:47:09.367823] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:20:08.845 [2024-12-10 21:47:09.367903] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:20:08.845 [2024-12-10 21:47:09.367962] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:20:08.845 [2024-12-10 21:47:09.367974] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:20:08.845 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.845 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:08.845 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:20:08.845 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.845 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:08.845 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.845 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:20:08.845 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:20:08.845 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:20:08.845 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:20:08.845 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.845 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:08.845 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.845 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:20:08.845 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:20:08.845 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.845 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:08.845 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.845 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:20:08.845 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:20:08.845 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.845 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:08.845 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.845 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:20:08.845 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:20:08.845 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0
00:20:08.845 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:20:08.845 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:20:08.845 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:08.845 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:20:08.845 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:08.845 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:20:08.845 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.845 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:08.845 [2024-12-10 21:47:09.507573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:20:08.845 [2024-12-10 21:47:09.509476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:20:08.845 [2024-12-10 21:47:09.509550] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:20:08.845 [2024-12-10 21:47:09.509611] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:20:08.845 [2024-12-10 21:47:09.509626] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:20:08.845 [2024-12-10 21:47:09.509636] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
request:
00:20:08.846 {
00:20:08.846 "name": "raid_bdev1",
00:20:08.846 "raid_level": "raid1",
00:20:08.846 "base_bdevs": [
00:20:08.846 "malloc1",
00:20:08.846 "malloc2"
00:20:08.846 ],
00:20:08.846 "superblock": false,
00:20:08.846 "method": "bdev_raid_create",
00:20:08.846 "req_id": 1
00:20:08.846 }
00:20:08.846 Got JSON-RPC error response
00:20:08.846 response:
00:20:08.846 {
00:20:08.846 "code": -17,
00:20:08.846 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:20:08.846 }
00:20:08.846 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:20:08.846 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1
00:20:08.846 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:08.846 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:08.846 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:20:08.846 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:20:08.846 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:08.846 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.846 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:08.846 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.846 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:20:08.846 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:20:08.846 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:20:08.846 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.846 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:08.846 [2024-12-10 21:47:09.563531] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:20:08.846 [2024-12-10 21:47:09.563624] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:08.846 [2024-12-10 21:47:09.563683] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:20:08.846 [2024-12-10 21:47:09.563714] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:08.846 [2024-12-10 21:47:09.565619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:08.846 [2024-12-10 21:47:09.565690] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:20:08.846 [2024-12-10 21:47:09.565763] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:20:08.846 [2024-12-10 21:47:09.565848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:20:08.846 pt1
00:20:08.846 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.846 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:20:08.846 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:08.846 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:20:08.846 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:20:08.846 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:20:08.846 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:20:08.846 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:08.846 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:08.846 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:08.846 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:08.846 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:08.846 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:08.846 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.846 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:08.846 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.846 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:08.846 "name": "raid_bdev1",
00:20:08.846 "uuid": "f338b92c-0165-4136-922c-bbee13e56625",
00:20:08.846 "strip_size_kb": 0,
00:20:08.846 "state": "configuring",
00:20:08.846 "raid_level": "raid1",
00:20:08.846 "superblock": true,
00:20:08.846 "num_base_bdevs": 2,
00:20:08.846 "num_base_bdevs_discovered": 1,
00:20:08.846 "num_base_bdevs_operational": 2,
00:20:08.846 "base_bdevs_list": [
00:20:08.846 {
00:20:08.846 "name": "pt1",
00:20:08.846 "uuid": "00000000-0000-0000-0000-000000000001",
00:20:08.846 "is_configured": true,
00:20:08.846 "data_offset": 256,
00:20:08.846 "data_size": 7936
00:20:08.846 },
00:20:08.846 {
00:20:08.846 "name": null,
00:20:08.846 "uuid": "00000000-0000-0000-0000-000000000002",
00:20:08.846 "is_configured": false,
00:20:08.846 "data_offset": 256,
00:20:08.846 "data_size": 7936
00:20:08.846 }
00:20:08.846 ]
00:20:08.846 }'
00:20:08.846 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:08.846 21:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:09.412 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:20:09.412 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:20:09.412 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:20:09.412 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:20:09.412 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:09.412 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:09.412 [2024-12-10 21:47:10.022761] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:20:09.412 [2024-12-10 21:47:10.022841] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:09.412 [2024-12-10 21:47:10.022863] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:20:09.412 [2024-12-10 21:47:10.022873] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:09.412 [2024-12-10 21:47:10.023042] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:09.412 [2024-12-10 21:47:10.023055] vbdev_passthru.c: 711:vbdev_passthru_register:
*NOTICE*: created pt_bdev for: pt2 00:20:09.412 [2024-12-10 21:47:10.023107] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:09.412 [2024-12-10 21:47:10.023128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:09.412 [2024-12-10 21:47:10.023206] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:09.412 [2024-12-10 21:47:10.023216] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:09.412 [2024-12-10 21:47:10.023288] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:09.412 [2024-12-10 21:47:10.023351] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:09.412 [2024-12-10 21:47:10.023358] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:09.412 [2024-12-10 21:47:10.023432] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:09.412 pt2 00:20:09.412 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.412 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:09.412 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:09.412 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:09.412 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:09.412 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:09.412 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:09.412 21:47:10 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:09.412 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:09.412 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:09.412 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:09.412 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:09.412 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:09.412 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.413 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.413 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.413 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.413 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.413 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:09.413 "name": "raid_bdev1", 00:20:09.413 "uuid": "f338b92c-0165-4136-922c-bbee13e56625", 00:20:09.413 "strip_size_kb": 0, 00:20:09.413 "state": "online", 00:20:09.413 "raid_level": "raid1", 00:20:09.413 "superblock": true, 00:20:09.413 "num_base_bdevs": 2, 00:20:09.413 "num_base_bdevs_discovered": 2, 00:20:09.413 "num_base_bdevs_operational": 2, 00:20:09.413 "base_bdevs_list": [ 00:20:09.413 { 00:20:09.413 "name": "pt1", 00:20:09.413 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:09.413 "is_configured": true, 00:20:09.413 
"data_offset": 256, 00:20:09.413 "data_size": 7936 00:20:09.413 }, 00:20:09.413 { 00:20:09.413 "name": "pt2", 00:20:09.413 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:09.413 "is_configured": true, 00:20:09.413 "data_offset": 256, 00:20:09.413 "data_size": 7936 00:20:09.413 } 00:20:09.413 ] 00:20:09.413 }' 00:20:09.413 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:09.413 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.671 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:09.671 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:09.671 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:09.671 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:09.671 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:20:09.671 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:09.672 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:09.672 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:09.672 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.672 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.931 [2024-12-10 21:47:10.458300] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:09.931 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:20:09.931 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:09.931 "name": "raid_bdev1", 00:20:09.931 "aliases": [ 00:20:09.931 "f338b92c-0165-4136-922c-bbee13e56625" 00:20:09.931 ], 00:20:09.931 "product_name": "Raid Volume", 00:20:09.931 "block_size": 4128, 00:20:09.931 "num_blocks": 7936, 00:20:09.931 "uuid": "f338b92c-0165-4136-922c-bbee13e56625", 00:20:09.931 "md_size": 32, 00:20:09.931 "md_interleave": true, 00:20:09.931 "dif_type": 0, 00:20:09.931 "assigned_rate_limits": { 00:20:09.931 "rw_ios_per_sec": 0, 00:20:09.931 "rw_mbytes_per_sec": 0, 00:20:09.931 "r_mbytes_per_sec": 0, 00:20:09.931 "w_mbytes_per_sec": 0 00:20:09.931 }, 00:20:09.931 "claimed": false, 00:20:09.931 "zoned": false, 00:20:09.931 "supported_io_types": { 00:20:09.931 "read": true, 00:20:09.931 "write": true, 00:20:09.931 "unmap": false, 00:20:09.931 "flush": false, 00:20:09.931 "reset": true, 00:20:09.931 "nvme_admin": false, 00:20:09.931 "nvme_io": false, 00:20:09.931 "nvme_io_md": false, 00:20:09.931 "write_zeroes": true, 00:20:09.931 "zcopy": false, 00:20:09.931 "get_zone_info": false, 00:20:09.931 "zone_management": false, 00:20:09.931 "zone_append": false, 00:20:09.931 "compare": false, 00:20:09.931 "compare_and_write": false, 00:20:09.931 "abort": false, 00:20:09.931 "seek_hole": false, 00:20:09.931 "seek_data": false, 00:20:09.931 "copy": false, 00:20:09.931 "nvme_iov_md": false 00:20:09.931 }, 00:20:09.931 "memory_domains": [ 00:20:09.931 { 00:20:09.932 "dma_device_id": "system", 00:20:09.932 "dma_device_type": 1 00:20:09.932 }, 00:20:09.932 { 00:20:09.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.932 "dma_device_type": 2 00:20:09.932 }, 00:20:09.932 { 00:20:09.932 "dma_device_id": "system", 00:20:09.932 "dma_device_type": 1 00:20:09.932 }, 00:20:09.932 { 00:20:09.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.932 "dma_device_type": 2 00:20:09.932 } 00:20:09.932 ], 00:20:09.932 "driver_specific": { 
00:20:09.932 "raid": { 00:20:09.932 "uuid": "f338b92c-0165-4136-922c-bbee13e56625", 00:20:09.932 "strip_size_kb": 0, 00:20:09.932 "state": "online", 00:20:09.932 "raid_level": "raid1", 00:20:09.932 "superblock": true, 00:20:09.932 "num_base_bdevs": 2, 00:20:09.932 "num_base_bdevs_discovered": 2, 00:20:09.932 "num_base_bdevs_operational": 2, 00:20:09.932 "base_bdevs_list": [ 00:20:09.932 { 00:20:09.932 "name": "pt1", 00:20:09.932 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:09.932 "is_configured": true, 00:20:09.932 "data_offset": 256, 00:20:09.932 "data_size": 7936 00:20:09.932 }, 00:20:09.932 { 00:20:09.932 "name": "pt2", 00:20:09.932 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:09.932 "is_configured": true, 00:20:09.932 "data_offset": 256, 00:20:09.932 "data_size": 7936 00:20:09.932 } 00:20:09.932 ] 00:20:09.932 } 00:20:09.932 } 00:20:09.932 }' 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:09.932 pt2' 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.932 [2024-12-10 21:47:10.669902] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' f338b92c-0165-4136-922c-bbee13e56625 '!=' f338b92c-0165-4136-922c-bbee13e56625 ']' 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.932 [2024-12-10 21:47:10.701612] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:09.932 
21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:09.932 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:10.192 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.192 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.192 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.192 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.192 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.192 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:10.192 "name": "raid_bdev1", 00:20:10.192 "uuid": "f338b92c-0165-4136-922c-bbee13e56625", 00:20:10.192 "strip_size_kb": 0, 00:20:10.192 "state": "online", 00:20:10.192 "raid_level": "raid1", 00:20:10.192 "superblock": true, 00:20:10.192 "num_base_bdevs": 2, 00:20:10.192 "num_base_bdevs_discovered": 1, 00:20:10.192 "num_base_bdevs_operational": 1, 00:20:10.192 "base_bdevs_list": [ 00:20:10.192 { 00:20:10.192 "name": null, 00:20:10.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.192 "is_configured": false, 00:20:10.192 
"data_offset": 0, 00:20:10.192 "data_size": 7936 00:20:10.192 }, 00:20:10.192 { 00:20:10.192 "name": "pt2", 00:20:10.192 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:10.192 "is_configured": true, 00:20:10.192 "data_offset": 256, 00:20:10.192 "data_size": 7936 00:20:10.192 } 00:20:10.192 ] 00:20:10.192 }' 00:20:10.192 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:10.192 21:47:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.451 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:10.451 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.451 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.451 [2024-12-10 21:47:11.128879] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:10.451 [2024-12-10 21:47:11.128968] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:10.451 [2024-12-10 21:47:11.129075] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:10.451 [2024-12-10 21:47:11.129153] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:10.451 [2024-12-10 21:47:11.129201] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:10.451 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.451 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:10.451 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.451 21:47:11 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.451 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.451 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.451 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:10.451 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:10.451 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:10.451 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:10.451 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:10.451 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.451 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.451 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.451 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:10.451 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:10.451 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:10.452 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:10.452 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:20:10.452 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:20:10.452 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.452 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.452 [2024-12-10 21:47:11.184780] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:10.452 [2024-12-10 21:47:11.184843] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:10.452 [2024-12-10 21:47:11.184862] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:10.452 [2024-12-10 21:47:11.184873] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:10.452 [2024-12-10 21:47:11.186818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:10.452 [2024-12-10 21:47:11.186858] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:10.452 [2024-12-10 21:47:11.186912] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:10.452 [2024-12-10 21:47:11.186964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:10.452 [2024-12-10 21:47:11.187027] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:10.452 [2024-12-10 21:47:11.187039] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:10.452 [2024-12-10 21:47:11.187124] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:10.452 [2024-12-10 21:47:11.187188] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:10.452 [2024-12-10 21:47:11.187196] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:10.452 [2024-12-10 21:47:11.187257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:20:10.452 pt2 00:20:10.452 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.452 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:10.452 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:10.452 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:10.452 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:10.452 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:10.452 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:10.452 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:10.452 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:10.452 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:10.452 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:10.452 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.452 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.452 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.452 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.452 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.711 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:10.711 "name": "raid_bdev1", 00:20:10.711 "uuid": "f338b92c-0165-4136-922c-bbee13e56625", 00:20:10.711 "strip_size_kb": 0, 00:20:10.711 "state": "online", 00:20:10.711 "raid_level": "raid1", 00:20:10.711 "superblock": true, 00:20:10.711 "num_base_bdevs": 2, 00:20:10.711 "num_base_bdevs_discovered": 1, 00:20:10.711 "num_base_bdevs_operational": 1, 00:20:10.711 "base_bdevs_list": [ 00:20:10.711 { 00:20:10.711 "name": null, 00:20:10.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.711 "is_configured": false, 00:20:10.711 "data_offset": 256, 00:20:10.711 "data_size": 7936 00:20:10.711 }, 00:20:10.711 { 00:20:10.711 "name": "pt2", 00:20:10.711 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:10.711 "is_configured": true, 00:20:10.711 "data_offset": 256, 00:20:10.711 "data_size": 7936 00:20:10.711 } 00:20:10.711 ] 00:20:10.711 }' 00:20:10.711 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:10.711 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.970 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:10.970 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.970 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.970 [2024-12-10 21:47:11.639980] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:10.970 [2024-12-10 21:47:11.640067] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:10.970 [2024-12-10 21:47:11.640166] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:10.970 
[2024-12-10 21:47:11.640243] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:10.970 [2024-12-10 21:47:11.640295] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:10.970 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.970 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.970 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.970 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:10.970 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.970 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.970 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:10.970 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:10.970 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:20:10.970 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:10.970 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.970 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.970 [2024-12-10 21:47:11.699887] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:10.970 [2024-12-10 21:47:11.699991] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:20:10.970 [2024-12-10 21:47:11.700029] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:20:10.970 [2024-12-10 21:47:11.700061] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:10.970 [2024-12-10 21:47:11.702037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:10.970 [2024-12-10 21:47:11.702110] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:10.970 [2024-12-10 21:47:11.702187] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:10.970 [2024-12-10 21:47:11.702256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:10.970 [2024-12-10 21:47:11.702371] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:10.970 [2024-12-10 21:47:11.702443] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:10.970 [2024-12-10 21:47:11.702486] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:10.970 [2024-12-10 21:47:11.702596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:10.970 [2024-12-10 21:47:11.702703] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:20:10.970 [2024-12-10 21:47:11.702740] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:10.970 [2024-12-10 21:47:11.702831] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:10.970 [2024-12-10 21:47:11.702924] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:10.970 [2024-12-10 21:47:11.702959] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:10.970 [2024-12-10 
21:47:11.703063] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:10.970 pt1 00:20:10.970 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.970 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:20:10.970 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:10.970 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:10.971 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:10.971 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:10.971 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:10.971 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:10.971 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:10.971 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:10.971 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:10.971 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:10.971 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.971 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.971 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.971 
21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.971 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.229 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.229 "name": "raid_bdev1", 00:20:11.229 "uuid": "f338b92c-0165-4136-922c-bbee13e56625", 00:20:11.229 "strip_size_kb": 0, 00:20:11.229 "state": "online", 00:20:11.229 "raid_level": "raid1", 00:20:11.229 "superblock": true, 00:20:11.229 "num_base_bdevs": 2, 00:20:11.229 "num_base_bdevs_discovered": 1, 00:20:11.229 "num_base_bdevs_operational": 1, 00:20:11.229 "base_bdevs_list": [ 00:20:11.229 { 00:20:11.229 "name": null, 00:20:11.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.229 "is_configured": false, 00:20:11.229 "data_offset": 256, 00:20:11.229 "data_size": 7936 00:20:11.229 }, 00:20:11.229 { 00:20:11.229 "name": "pt2", 00:20:11.229 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:11.229 "is_configured": true, 00:20:11.229 "data_offset": 256, 00:20:11.229 "data_size": 7936 00:20:11.229 } 00:20:11.229 ] 00:20:11.229 }' 00:20:11.229 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.229 21:47:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.487 21:47:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:11.487 21:47:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:11.487 21:47:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.487 21:47:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.487 21:47:12 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.487 21:47:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:11.487 21:47:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:11.487 21:47:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:11.487 21:47:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.487 21:47:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.487 [2024-12-10 21:47:12.199272] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:11.487 21:47:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.487 21:47:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' f338b92c-0165-4136-922c-bbee13e56625 '!=' f338b92c-0165-4136-922c-bbee13e56625 ']' 00:20:11.487 21:47:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88862 00:20:11.487 21:47:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88862 ']' 00:20:11.487 21:47:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88862 00:20:11.487 21:47:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:20:11.487 21:47:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:11.487 21:47:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88862 00:20:11.487 21:47:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:11.487 21:47:12 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:11.487 21:47:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88862' 00:20:11.487 killing process with pid 88862 00:20:11.487 21:47:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 88862 00:20:11.487 [2024-12-10 21:47:12.262806] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:11.487 21:47:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 88862 00:20:11.487 [2024-12-10 21:47:12.262970] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:11.487 [2024-12-10 21:47:12.263052] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:11.487 [2024-12-10 21:47:12.263100] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:11.746 [2024-12-10 21:47:12.472940] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:13.127 21:47:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:20:13.127 00:20:13.127 real 0m6.005s 00:20:13.127 user 0m9.079s 00:20:13.127 sys 0m1.041s 00:20:13.127 21:47:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:13.127 ************************************ 00:20:13.127 END TEST raid_superblock_test_md_interleaved 00:20:13.127 ************************************ 00:20:13.127 21:47:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.127 21:47:13 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:20:13.127 21:47:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 
00:20:13.127 21:47:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:13.127 21:47:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:13.127 ************************************ 00:20:13.127 START TEST raid_rebuild_test_sb_md_interleaved 00:20:13.127 ************************************ 00:20:13.127 21:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:20:13.127 21:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:13.127 21:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:20:13.127 21:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:13.127 21:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:13.127 21:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:20:13.127 21:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:13.127 21:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:13.127 21:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:13.127 21:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:13.127 21:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:13.127 21:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:13.127 21:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:13.127 21:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:20:13.127 21:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:13.127 21:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:13.127 21:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:13.127 21:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:13.127 21:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:13.127 21:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:13.127 21:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:13.128 21:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:13.128 21:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:13.128 21:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:13.128 21:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:13.128 21:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89195 00:20:13.128 21:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:13.128 21:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89195 00:20:13.128 21:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89195 ']' 00:20:13.128 21:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:20:13.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.128 21:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:13.128 21:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.128 21:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:13.128 21:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.128 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:13.128 Zero copy mechanism will not be used. 00:20:13.128 [2024-12-10 21:47:13.775210] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:20:13.128 [2024-12-10 21:47:13.775329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89195 ] 00:20:13.386 [2024-12-10 21:47:13.945339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.386 [2024-12-10 21:47:14.057968] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.645 [2024-12-10 21:47:14.247350] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:13.645 [2024-12-10 21:47:14.247406] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:13.904 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:13.904 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:20:13.905 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:13.905 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:20:13.905 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.905 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.905 BaseBdev1_malloc 00:20:13.905 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.905 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:13.905 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.905 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.905 [2024-12-10 21:47:14.646189] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:13.905 [2024-12-10 21:47:14.646252] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:13.905 [2024-12-10 21:47:14.646273] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:13.905 [2024-12-10 21:47:14.646284] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:13.905 [2024-12-10 21:47:14.648211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:13.905 [2024-12-10 21:47:14.648279] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:13.905 BaseBdev1 00:20:13.905 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.905 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in 
"${base_bdevs[@]}" 00:20:13.905 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:20:13.905 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.905 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.164 BaseBdev2_malloc 00:20:14.164 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.164 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:14.164 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.164 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.164 [2024-12-10 21:47:14.698373] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:14.164 [2024-12-10 21:47:14.698452] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:14.164 [2024-12-10 21:47:14.698472] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:14.164 [2024-12-10 21:47:14.698484] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:14.164 [2024-12-10 21:47:14.700184] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:14.164 [2024-12-10 21:47:14.700229] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:14.164 BaseBdev2 00:20:14.164 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.164 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 
00:20:14.164 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.164 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.164 spare_malloc 00:20:14.164 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.164 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:14.164 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.164 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.164 spare_delay 00:20:14.164 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.164 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:14.164 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.164 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.164 [2024-12-10 21:47:14.774590] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:14.164 [2024-12-10 21:47:14.774651] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:14.164 [2024-12-10 21:47:14.774688] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:14.164 [2024-12-10 21:47:14.774699] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:14.164 [2024-12-10 21:47:14.776592] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:14.164 [2024-12-10 21:47:14.776635] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:14.164 spare 00:20:14.164 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.164 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:14.164 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.164 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.164 [2024-12-10 21:47:14.786616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:14.164 [2024-12-10 21:47:14.788354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:14.164 [2024-12-10 21:47:14.788560] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:14.164 [2024-12-10 21:47:14.788577] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:14.164 [2024-12-10 21:47:14.788644] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:14.164 [2024-12-10 21:47:14.788715] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:14.164 [2024-12-10 21:47:14.788723] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:14.164 [2024-12-10 21:47:14.788787] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:14.164 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.164 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:14.164 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:14.164 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:14.164 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:14.164 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:14.164 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:14.164 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:14.164 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:14.164 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:14.164 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:14.165 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.165 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.165 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.165 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.165 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.165 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:14.165 "name": "raid_bdev1", 00:20:14.165 "uuid": "74761ef9-46a3-4cb3-92d9-685be775e0d9", 00:20:14.165 "strip_size_kb": 0, 00:20:14.165 "state": "online", 00:20:14.165 "raid_level": "raid1", 00:20:14.165 "superblock": 
true, 00:20:14.165 "num_base_bdevs": 2, 00:20:14.165 "num_base_bdevs_discovered": 2, 00:20:14.165 "num_base_bdevs_operational": 2, 00:20:14.165 "base_bdevs_list": [ 00:20:14.165 { 00:20:14.165 "name": "BaseBdev1", 00:20:14.165 "uuid": "72489420-cc1d-5897-be6f-2e25b4c88cef", 00:20:14.165 "is_configured": true, 00:20:14.165 "data_offset": 256, 00:20:14.165 "data_size": 7936 00:20:14.165 }, 00:20:14.165 { 00:20:14.165 "name": "BaseBdev2", 00:20:14.165 "uuid": "36581ab8-8ab0-51b6-838c-80258cbb6039", 00:20:14.165 "is_configured": true, 00:20:14.165 "data_offset": 256, 00:20:14.165 "data_size": 7936 00:20:14.165 } 00:20:14.165 ] 00:20:14.165 }' 00:20:14.165 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:14.165 21:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.735 21:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:14.735 21:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:14.735 21:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.735 21:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.735 [2024-12-10 21:47:15.274076] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:14.735 21:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.735 21:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:20:14.735 21:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.735 21:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.735 21:47:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.735 21:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:14.735 21:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.735 21:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:20:14.735 21:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:14.735 21:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:20:14.735 21:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:14.735 21:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.735 21:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.735 [2024-12-10 21:47:15.353666] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:14.735 21:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.735 21:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:14.735 21:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:14.735 21:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:14.735 21:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:14.735 21:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:14.735 21:47:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:14.735 21:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:14.735 21:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:14.735 21:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:14.735 21:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:14.735 21:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.735 21:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.735 21:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.735 21:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.735 21:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.735 21:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:14.735 "name": "raid_bdev1", 00:20:14.735 "uuid": "74761ef9-46a3-4cb3-92d9-685be775e0d9", 00:20:14.735 "strip_size_kb": 0, 00:20:14.735 "state": "online", 00:20:14.735 "raid_level": "raid1", 00:20:14.735 "superblock": true, 00:20:14.735 "num_base_bdevs": 2, 00:20:14.735 "num_base_bdevs_discovered": 1, 00:20:14.735 "num_base_bdevs_operational": 1, 00:20:14.735 "base_bdevs_list": [ 00:20:14.735 { 00:20:14.735 "name": null, 00:20:14.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.735 "is_configured": false, 00:20:14.735 "data_offset": 0, 00:20:14.735 "data_size": 7936 00:20:14.735 }, 00:20:14.735 { 00:20:14.735 "name": "BaseBdev2", 00:20:14.735 
"uuid": "36581ab8-8ab0-51b6-838c-80258cbb6039", 00:20:14.735 "is_configured": true, 00:20:14.735 "data_offset": 256, 00:20:14.735 "data_size": 7936 00:20:14.735 } 00:20:14.735 ] 00:20:14.735 }' 00:20:14.735 21:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:14.735 21:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:15.305 21:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:15.305 21:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.305 21:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:15.305 [2024-12-10 21:47:15.808962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:15.305 [2024-12-10 21:47:15.825065] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:15.305 21:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.305 21:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:15.305 [2024-12-10 21:47:15.826914] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:16.271 21:47:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:16.271 21:47:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:16.271 21:47:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:16.271 21:47:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:16.271 21:47:16 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:16.271 21:47:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.271 21:47:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.271 21:47:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.271 21:47:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.271 21:47:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.271 21:47:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:16.271 "name": "raid_bdev1", 00:20:16.271 "uuid": "74761ef9-46a3-4cb3-92d9-685be775e0d9", 00:20:16.271 "strip_size_kb": 0, 00:20:16.271 "state": "online", 00:20:16.271 "raid_level": "raid1", 00:20:16.271 "superblock": true, 00:20:16.271 "num_base_bdevs": 2, 00:20:16.271 "num_base_bdevs_discovered": 2, 00:20:16.271 "num_base_bdevs_operational": 2, 00:20:16.271 "process": { 00:20:16.271 "type": "rebuild", 00:20:16.271 "target": "spare", 00:20:16.271 "progress": { 00:20:16.271 "blocks": 2560, 00:20:16.271 "percent": 32 00:20:16.271 } 00:20:16.271 }, 00:20:16.271 "base_bdevs_list": [ 00:20:16.271 { 00:20:16.271 "name": "spare", 00:20:16.271 "uuid": "601a7d16-3bae-5750-97be-56d1860d7236", 00:20:16.271 "is_configured": true, 00:20:16.271 "data_offset": 256, 00:20:16.271 "data_size": 7936 00:20:16.271 }, 00:20:16.271 { 00:20:16.271 "name": "BaseBdev2", 00:20:16.271 "uuid": "36581ab8-8ab0-51b6-838c-80258cbb6039", 00:20:16.271 "is_configured": true, 00:20:16.271 "data_offset": 256, 00:20:16.271 "data_size": 7936 00:20:16.271 } 00:20:16.271 ] 00:20:16.271 }' 00:20:16.272 21:47:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:20:16.272 21:47:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:16.272 21:47:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:16.272 21:47:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:16.272 21:47:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:16.272 21:47:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.272 21:47:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.272 [2024-12-10 21:47:16.966457] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:16.272 [2024-12-10 21:47:17.032683] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:16.272 [2024-12-10 21:47:17.032833] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:16.272 [2024-12-10 21:47:17.032882] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:16.272 [2024-12-10 21:47:17.032933] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:16.545 21:47:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.545 21:47:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:16.545 21:47:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:16.545 21:47:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:16.545 21:47:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:16.545 21:47:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:16.545 21:47:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:16.545 21:47:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:16.545 21:47:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:16.545 21:47:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:16.545 21:47:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:16.545 21:47:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.545 21:47:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.545 21:47:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.545 21:47:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.545 21:47:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.545 21:47:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:16.545 "name": "raid_bdev1", 00:20:16.545 "uuid": "74761ef9-46a3-4cb3-92d9-685be775e0d9", 00:20:16.545 "strip_size_kb": 0, 00:20:16.545 "state": "online", 00:20:16.545 "raid_level": "raid1", 00:20:16.545 "superblock": true, 00:20:16.545 "num_base_bdevs": 2, 00:20:16.545 "num_base_bdevs_discovered": 1, 00:20:16.545 "num_base_bdevs_operational": 1, 00:20:16.545 "base_bdevs_list": [ 00:20:16.545 { 00:20:16.545 "name": null, 00:20:16.545 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:16.545 "is_configured": false, 00:20:16.545 "data_offset": 0, 00:20:16.545 "data_size": 7936 00:20:16.545 }, 00:20:16.545 { 00:20:16.545 "name": "BaseBdev2", 00:20:16.545 "uuid": "36581ab8-8ab0-51b6-838c-80258cbb6039", 00:20:16.545 "is_configured": true, 00:20:16.545 "data_offset": 256, 00:20:16.545 "data_size": 7936 00:20:16.545 } 00:20:16.545 ] 00:20:16.545 }' 00:20:16.545 21:47:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:16.545 21:47:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.803 21:47:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:16.803 21:47:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:16.803 21:47:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:16.803 21:47:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:16.803 21:47:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:16.803 21:47:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.803 21:47:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.803 21:47:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.803 21:47:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.803 21:47:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.803 21:47:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:20:16.803 "name": "raid_bdev1", 00:20:16.803 "uuid": "74761ef9-46a3-4cb3-92d9-685be775e0d9", 00:20:16.803 "strip_size_kb": 0, 00:20:16.803 "state": "online", 00:20:16.803 "raid_level": "raid1", 00:20:16.803 "superblock": true, 00:20:16.803 "num_base_bdevs": 2, 00:20:16.803 "num_base_bdevs_discovered": 1, 00:20:16.803 "num_base_bdevs_operational": 1, 00:20:16.803 "base_bdevs_list": [ 00:20:16.803 { 00:20:16.803 "name": null, 00:20:16.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.803 "is_configured": false, 00:20:16.803 "data_offset": 0, 00:20:16.803 "data_size": 7936 00:20:16.803 }, 00:20:16.803 { 00:20:16.803 "name": "BaseBdev2", 00:20:16.803 "uuid": "36581ab8-8ab0-51b6-838c-80258cbb6039", 00:20:16.803 "is_configured": true, 00:20:16.803 "data_offset": 256, 00:20:16.803 "data_size": 7936 00:20:16.803 } 00:20:16.803 ] 00:20:16.803 }' 00:20:16.803 21:47:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:16.803 21:47:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:16.803 21:47:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:17.061 21:47:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:17.061 21:47:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:17.061 21:47:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.061 21:47:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.061 [2024-12-10 21:47:17.619118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:17.061 [2024-12-10 21:47:17.635439] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005fb0 00:20:17.061 21:47:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.061 21:47:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:17.061 [2024-12-10 21:47:17.637210] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:17.994 21:47:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:17.994 21:47:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:17.994 21:47:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:17.994 21:47:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:17.994 21:47:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:17.994 21:47:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.994 21:47:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.994 21:47:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.994 21:47:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.994 21:47:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.994 21:47:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:17.994 "name": "raid_bdev1", 00:20:17.994 "uuid": "74761ef9-46a3-4cb3-92d9-685be775e0d9", 00:20:17.994 "strip_size_kb": 0, 00:20:17.994 "state": "online", 00:20:17.994 "raid_level": "raid1", 00:20:17.994 "superblock": true, 
00:20:17.994 "num_base_bdevs": 2, 00:20:17.994 "num_base_bdevs_discovered": 2, 00:20:17.994 "num_base_bdevs_operational": 2, 00:20:17.994 "process": { 00:20:17.994 "type": "rebuild", 00:20:17.994 "target": "spare", 00:20:17.994 "progress": { 00:20:17.994 "blocks": 2560, 00:20:17.994 "percent": 32 00:20:17.994 } 00:20:17.994 }, 00:20:17.994 "base_bdevs_list": [ 00:20:17.994 { 00:20:17.994 "name": "spare", 00:20:17.994 "uuid": "601a7d16-3bae-5750-97be-56d1860d7236", 00:20:17.994 "is_configured": true, 00:20:17.994 "data_offset": 256, 00:20:17.994 "data_size": 7936 00:20:17.994 }, 00:20:17.994 { 00:20:17.994 "name": "BaseBdev2", 00:20:17.994 "uuid": "36581ab8-8ab0-51b6-838c-80258cbb6039", 00:20:17.994 "is_configured": true, 00:20:17.994 "data_offset": 256, 00:20:17.994 "data_size": 7936 00:20:17.994 } 00:20:17.994 ] 00:20:17.994 }' 00:20:17.994 21:47:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:17.994 21:47:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:17.994 21:47:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:18.252 21:47:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:18.252 21:47:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:18.252 21:47:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:18.252 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:18.252 21:47:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:20:18.252 21:47:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:18.252 21:47:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:20:18.252 21:47:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=750 00:20:18.252 21:47:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:18.252 21:47:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:18.252 21:47:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:18.252 21:47:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:18.252 21:47:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:18.252 21:47:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:18.253 21:47:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.253 21:47:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.253 21:47:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.253 21:47:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.253 21:47:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.253 21:47:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:18.253 "name": "raid_bdev1", 00:20:18.253 "uuid": "74761ef9-46a3-4cb3-92d9-685be775e0d9", 00:20:18.253 "strip_size_kb": 0, 00:20:18.253 "state": "online", 00:20:18.253 "raid_level": "raid1", 00:20:18.253 "superblock": true, 00:20:18.253 "num_base_bdevs": 2, 00:20:18.253 
"num_base_bdevs_discovered": 2, 00:20:18.253 "num_base_bdevs_operational": 2, 00:20:18.253 "process": { 00:20:18.253 "type": "rebuild", 00:20:18.253 "target": "spare", 00:20:18.253 "progress": { 00:20:18.253 "blocks": 2816, 00:20:18.253 "percent": 35 00:20:18.253 } 00:20:18.253 }, 00:20:18.253 "base_bdevs_list": [ 00:20:18.253 { 00:20:18.253 "name": "spare", 00:20:18.253 "uuid": "601a7d16-3bae-5750-97be-56d1860d7236", 00:20:18.253 "is_configured": true, 00:20:18.253 "data_offset": 256, 00:20:18.253 "data_size": 7936 00:20:18.253 }, 00:20:18.253 { 00:20:18.253 "name": "BaseBdev2", 00:20:18.253 "uuid": "36581ab8-8ab0-51b6-838c-80258cbb6039", 00:20:18.253 "is_configured": true, 00:20:18.253 "data_offset": 256, 00:20:18.253 "data_size": 7936 00:20:18.253 } 00:20:18.253 ] 00:20:18.253 }' 00:20:18.253 21:47:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:18.253 21:47:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:18.253 21:47:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:18.253 21:47:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:18.253 21:47:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:19.187 21:47:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:19.187 21:47:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:19.187 21:47:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:19.187 21:47:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:19.187 21:47:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:19.187 21:47:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:19.187 21:47:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.187 21:47:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.187 21:47:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.187 21:47:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.187 21:47:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.445 21:47:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:19.446 "name": "raid_bdev1", 00:20:19.446 "uuid": "74761ef9-46a3-4cb3-92d9-685be775e0d9", 00:20:19.446 "strip_size_kb": 0, 00:20:19.446 "state": "online", 00:20:19.446 "raid_level": "raid1", 00:20:19.446 "superblock": true, 00:20:19.446 "num_base_bdevs": 2, 00:20:19.446 "num_base_bdevs_discovered": 2, 00:20:19.446 "num_base_bdevs_operational": 2, 00:20:19.446 "process": { 00:20:19.446 "type": "rebuild", 00:20:19.446 "target": "spare", 00:20:19.446 "progress": { 00:20:19.446 "blocks": 5632, 00:20:19.446 "percent": 70 00:20:19.446 } 00:20:19.446 }, 00:20:19.446 "base_bdevs_list": [ 00:20:19.446 { 00:20:19.446 "name": "spare", 00:20:19.446 "uuid": "601a7d16-3bae-5750-97be-56d1860d7236", 00:20:19.446 "is_configured": true, 00:20:19.446 "data_offset": 256, 00:20:19.446 "data_size": 7936 00:20:19.446 }, 00:20:19.446 { 00:20:19.446 "name": "BaseBdev2", 00:20:19.446 "uuid": "36581ab8-8ab0-51b6-838c-80258cbb6039", 00:20:19.446 "is_configured": true, 00:20:19.446 "data_offset": 256, 00:20:19.446 "data_size": 7936 00:20:19.446 } 
00:20:19.446 ] 00:20:19.446 }' 00:20:19.446 21:47:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:19.446 21:47:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:19.446 21:47:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:19.446 21:47:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:19.446 21:47:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:20.011 [2024-12-10 21:47:20.750723] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:20.011 [2024-12-10 21:47:20.750806] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:20.011 [2024-12-10 21:47:20.750924] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:20.270 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:20.270 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:20.270 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:20.270 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:20.270 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:20.270 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:20.270 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.270 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.270 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.270 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:20.529 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.529 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:20.529 "name": "raid_bdev1", 00:20:20.529 "uuid": "74761ef9-46a3-4cb3-92d9-685be775e0d9", 00:20:20.529 "strip_size_kb": 0, 00:20:20.529 "state": "online", 00:20:20.529 "raid_level": "raid1", 00:20:20.529 "superblock": true, 00:20:20.529 "num_base_bdevs": 2, 00:20:20.530 "num_base_bdevs_discovered": 2, 00:20:20.530 "num_base_bdevs_operational": 2, 00:20:20.530 "base_bdevs_list": [ 00:20:20.530 { 00:20:20.530 "name": "spare", 00:20:20.530 "uuid": "601a7d16-3bae-5750-97be-56d1860d7236", 00:20:20.530 "is_configured": true, 00:20:20.530 "data_offset": 256, 00:20:20.530 "data_size": 7936 00:20:20.530 }, 00:20:20.530 { 00:20:20.530 "name": "BaseBdev2", 00:20:20.530 "uuid": "36581ab8-8ab0-51b6-838c-80258cbb6039", 00:20:20.530 "is_configured": true, 00:20:20.530 "data_offset": 256, 00:20:20.530 "data_size": 7936 00:20:20.530 } 00:20:20.530 ] 00:20:20.530 }' 00:20:20.530 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:20.530 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:20.530 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:20.530 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:20.530 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@709 -- # break 00:20:20.530 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:20.530 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:20.530 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:20.530 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:20.530 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:20.530 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.530 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.530 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:20.530 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.530 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.530 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:20.530 "name": "raid_bdev1", 00:20:20.530 "uuid": "74761ef9-46a3-4cb3-92d9-685be775e0d9", 00:20:20.530 "strip_size_kb": 0, 00:20:20.530 "state": "online", 00:20:20.530 "raid_level": "raid1", 00:20:20.530 "superblock": true, 00:20:20.530 "num_base_bdevs": 2, 00:20:20.530 "num_base_bdevs_discovered": 2, 00:20:20.530 "num_base_bdevs_operational": 2, 00:20:20.530 "base_bdevs_list": [ 00:20:20.530 { 00:20:20.530 "name": "spare", 00:20:20.530 "uuid": "601a7d16-3bae-5750-97be-56d1860d7236", 00:20:20.530 "is_configured": true, 00:20:20.530 "data_offset": 256, 00:20:20.530 "data_size": 7936 
00:20:20.530 }, 00:20:20.530 { 00:20:20.530 "name": "BaseBdev2", 00:20:20.530 "uuid": "36581ab8-8ab0-51b6-838c-80258cbb6039", 00:20:20.530 "is_configured": true, 00:20:20.530 "data_offset": 256, 00:20:20.530 "data_size": 7936 00:20:20.530 } 00:20:20.530 ] 00:20:20.530 }' 00:20:20.530 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:20.530 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:20.530 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:20.789 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:20.789 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:20.789 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:20.789 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:20.789 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:20.789 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:20.789 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:20.789 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:20.789 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:20.789 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:20.789 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:20:20.789 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.789 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.789 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:20.789 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.789 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.789 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:20.789 "name": "raid_bdev1", 00:20:20.789 "uuid": "74761ef9-46a3-4cb3-92d9-685be775e0d9", 00:20:20.789 "strip_size_kb": 0, 00:20:20.789 "state": "online", 00:20:20.789 "raid_level": "raid1", 00:20:20.790 "superblock": true, 00:20:20.790 "num_base_bdevs": 2, 00:20:20.790 "num_base_bdevs_discovered": 2, 00:20:20.790 "num_base_bdevs_operational": 2, 00:20:20.790 "base_bdevs_list": [ 00:20:20.790 { 00:20:20.790 "name": "spare", 00:20:20.790 "uuid": "601a7d16-3bae-5750-97be-56d1860d7236", 00:20:20.790 "is_configured": true, 00:20:20.790 "data_offset": 256, 00:20:20.790 "data_size": 7936 00:20:20.790 }, 00:20:20.790 { 00:20:20.790 "name": "BaseBdev2", 00:20:20.790 "uuid": "36581ab8-8ab0-51b6-838c-80258cbb6039", 00:20:20.790 "is_configured": true, 00:20:20.790 "data_offset": 256, 00:20:20.790 "data_size": 7936 00:20:20.790 } 00:20:20.790 ] 00:20:20.790 }' 00:20:20.790 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:20.790 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.048 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete 
raid_bdev1 00:20:21.048 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.048 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.048 [2024-12-10 21:47:21.805052] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:21.048 [2024-12-10 21:47:21.805141] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:21.048 [2024-12-10 21:47:21.805265] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:21.048 [2024-12-10 21:47:21.805364] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:21.048 [2024-12-10 21:47:21.805422] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:21.048 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.048 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.048 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.049 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.049 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:20:21.049 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.307 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:21.307 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:20:21.307 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:21.307 21:47:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:21.307 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.307 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.307 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.307 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:21.307 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.307 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.307 [2024-12-10 21:47:21.872908] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:21.307 [2024-12-10 21:47:21.872968] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:21.307 [2024-12-10 21:47:21.873000] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:21.307 [2024-12-10 21:47:21.873009] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:21.307 [2024-12-10 21:47:21.874967] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:21.307 [2024-12-10 21:47:21.875002] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:21.307 [2024-12-10 21:47:21.875058] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:21.307 [2024-12-10 21:47:21.875107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:21.307 [2024-12-10 21:47:21.875234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:21.307 spare 00:20:21.307 21:47:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.307 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:21.307 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.307 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.307 [2024-12-10 21:47:21.975138] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:21.307 [2024-12-10 21:47:21.975167] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:21.307 [2024-12-10 21:47:21.975264] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:21.307 [2024-12-10 21:47:21.975363] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:21.307 [2024-12-10 21:47:21.975373] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:21.307 [2024-12-10 21:47:21.975492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:21.307 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.307 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:21.307 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:21.307 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:21.307 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:21.307 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:20:21.307 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:21.307 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:21.307 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:21.307 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:21.307 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:21.307 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.307 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.307 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.307 21:47:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.307 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.307 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:21.307 "name": "raid_bdev1", 00:20:21.307 "uuid": "74761ef9-46a3-4cb3-92d9-685be775e0d9", 00:20:21.307 "strip_size_kb": 0, 00:20:21.307 "state": "online", 00:20:21.307 "raid_level": "raid1", 00:20:21.307 "superblock": true, 00:20:21.307 "num_base_bdevs": 2, 00:20:21.307 "num_base_bdevs_discovered": 2, 00:20:21.307 "num_base_bdevs_operational": 2, 00:20:21.307 "base_bdevs_list": [ 00:20:21.307 { 00:20:21.307 "name": "spare", 00:20:21.307 "uuid": "601a7d16-3bae-5750-97be-56d1860d7236", 00:20:21.307 "is_configured": true, 00:20:21.307 "data_offset": 256, 00:20:21.307 "data_size": 7936 00:20:21.307 }, 00:20:21.307 { 00:20:21.307 "name": 
"BaseBdev2", 00:20:21.307 "uuid": "36581ab8-8ab0-51b6-838c-80258cbb6039", 00:20:21.307 "is_configured": true, 00:20:21.307 "data_offset": 256, 00:20:21.307 "data_size": 7936 00:20:21.307 } 00:20:21.307 ] 00:20:21.307 }' 00:20:21.307 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:21.307 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:21.874 "name": "raid_bdev1", 00:20:21.874 "uuid": "74761ef9-46a3-4cb3-92d9-685be775e0d9", 00:20:21.874 "strip_size_kb": 0, 00:20:21.874 "state": "online", 00:20:21.874 
"raid_level": "raid1", 00:20:21.874 "superblock": true, 00:20:21.874 "num_base_bdevs": 2, 00:20:21.874 "num_base_bdevs_discovered": 2, 00:20:21.874 "num_base_bdevs_operational": 2, 00:20:21.874 "base_bdevs_list": [ 00:20:21.874 { 00:20:21.874 "name": "spare", 00:20:21.874 "uuid": "601a7d16-3bae-5750-97be-56d1860d7236", 00:20:21.874 "is_configured": true, 00:20:21.874 "data_offset": 256, 00:20:21.874 "data_size": 7936 00:20:21.874 }, 00:20:21.874 { 00:20:21.874 "name": "BaseBdev2", 00:20:21.874 "uuid": "36581ab8-8ab0-51b6-838c-80258cbb6039", 00:20:21.874 "is_configured": true, 00:20:21.874 "data_offset": 256, 00:20:21.874 "data_size": 7936 00:20:21.874 } 00:20:21.874 ] 00:20:21.874 }' 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:21.874 21:47:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.874 [2024-12-10 21:47:22.552395] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.874 21:47:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:21.874 "name": "raid_bdev1", 00:20:21.874 "uuid": "74761ef9-46a3-4cb3-92d9-685be775e0d9", 00:20:21.874 "strip_size_kb": 0, 00:20:21.874 "state": "online", 00:20:21.874 "raid_level": "raid1", 00:20:21.874 "superblock": true, 00:20:21.874 "num_base_bdevs": 2, 00:20:21.874 "num_base_bdevs_discovered": 1, 00:20:21.874 "num_base_bdevs_operational": 1, 00:20:21.874 "base_bdevs_list": [ 00:20:21.874 { 00:20:21.874 "name": null, 00:20:21.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.874 "is_configured": false, 00:20:21.874 "data_offset": 0, 00:20:21.874 "data_size": 7936 00:20:21.874 }, 00:20:21.874 { 00:20:21.874 "name": "BaseBdev2", 00:20:21.874 "uuid": "36581ab8-8ab0-51b6-838c-80258cbb6039", 00:20:21.874 "is_configured": true, 00:20:21.874 "data_offset": 256, 00:20:21.874 "data_size": 7936 00:20:21.874 } 00:20:21.874 ] 00:20:21.874 }' 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:21.874 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.442 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:22.442 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.442 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- common/autotest_common.sh@10 -- # set +x 00:20:22.442 [2024-12-10 21:47:22.952412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:22.442 [2024-12-10 21:47:22.952689] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:22.442 [2024-12-10 21:47:22.952757] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:22.442 [2024-12-10 21:47:22.952858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:22.442 [2024-12-10 21:47:22.969262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:22.442 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.442 21:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:22.442 [2024-12-10 21:47:22.971155] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:23.376 21:47:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:23.376 21:47:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:23.376 21:47:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:23.376 21:47:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:23.376 21:47:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:23.376 21:47:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.376 21:47:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:20:23.376 21:47:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.376 21:47:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.376 21:47:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.376 21:47:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:23.376 "name": "raid_bdev1", 00:20:23.376 "uuid": "74761ef9-46a3-4cb3-92d9-685be775e0d9", 00:20:23.376 "strip_size_kb": 0, 00:20:23.376 "state": "online", 00:20:23.376 "raid_level": "raid1", 00:20:23.376 "superblock": true, 00:20:23.376 "num_base_bdevs": 2, 00:20:23.376 "num_base_bdevs_discovered": 2, 00:20:23.376 "num_base_bdevs_operational": 2, 00:20:23.376 "process": { 00:20:23.376 "type": "rebuild", 00:20:23.376 "target": "spare", 00:20:23.376 "progress": { 00:20:23.376 "blocks": 2560, 00:20:23.376 "percent": 32 00:20:23.376 } 00:20:23.376 }, 00:20:23.376 "base_bdevs_list": [ 00:20:23.376 { 00:20:23.376 "name": "spare", 00:20:23.376 "uuid": "601a7d16-3bae-5750-97be-56d1860d7236", 00:20:23.376 "is_configured": true, 00:20:23.376 "data_offset": 256, 00:20:23.376 "data_size": 7936 00:20:23.376 }, 00:20:23.376 { 00:20:23.376 "name": "BaseBdev2", 00:20:23.376 "uuid": "36581ab8-8ab0-51b6-838c-80258cbb6039", 00:20:23.376 "is_configured": true, 00:20:23.376 "data_offset": 256, 00:20:23.376 "data_size": 7936 00:20:23.376 } 00:20:23.376 ] 00:20:23.376 }' 00:20:23.376 21:47:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:23.376 21:47:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:23.376 21:47:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:23.376 21:47:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:23.376 21:47:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:23.376 21:47:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.376 21:47:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.376 [2024-12-10 21:47:24.094660] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:23.635 [2024-12-10 21:47:24.176923] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:23.635 [2024-12-10 21:47:24.177016] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:23.635 [2024-12-10 21:47:24.177032] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:23.635 [2024-12-10 21:47:24.177042] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:23.635 21:47:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.635 21:47:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:23.635 21:47:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:23.635 21:47:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:23.635 21:47:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:23.635 21:47:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:23.635 21:47:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:20:23.635 21:47:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:23.635 21:47:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:23.635 21:47:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:23.635 21:47:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:23.635 21:47:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.635 21:47:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.635 21:47:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.635 21:47:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.635 21:47:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.635 21:47:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:23.635 "name": "raid_bdev1", 00:20:23.635 "uuid": "74761ef9-46a3-4cb3-92d9-685be775e0d9", 00:20:23.635 "strip_size_kb": 0, 00:20:23.635 "state": "online", 00:20:23.635 "raid_level": "raid1", 00:20:23.635 "superblock": true, 00:20:23.635 "num_base_bdevs": 2, 00:20:23.635 "num_base_bdevs_discovered": 1, 00:20:23.635 "num_base_bdevs_operational": 1, 00:20:23.635 "base_bdevs_list": [ 00:20:23.635 { 00:20:23.635 "name": null, 00:20:23.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.635 "is_configured": false, 00:20:23.635 "data_offset": 0, 00:20:23.635 "data_size": 7936 00:20:23.635 }, 00:20:23.635 { 00:20:23.635 "name": "BaseBdev2", 00:20:23.635 "uuid": "36581ab8-8ab0-51b6-838c-80258cbb6039", 00:20:23.635 "is_configured": true, 00:20:23.635 "data_offset": 
256, 00:20:23.635 "data_size": 7936 00:20:23.635 } 00:20:23.635 ] 00:20:23.635 }' 00:20:23.635 21:47:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:23.635 21:47:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.894 21:47:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:23.894 21:47:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.894 21:47:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.894 [2024-12-10 21:47:24.628573] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:23.894 [2024-12-10 21:47:24.628736] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:23.894 [2024-12-10 21:47:24.628798] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:23.894 [2024-12-10 21:47:24.628835] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:23.894 [2024-12-10 21:47:24.629072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:23.894 [2024-12-10 21:47:24.629126] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:23.894 [2024-12-10 21:47:24.629220] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:23.894 [2024-12-10 21:47:24.629261] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:23.894 [2024-12-10 21:47:24.629305] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:23.894 [2024-12-10 21:47:24.629351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:23.894 [2024-12-10 21:47:24.646072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:23.894 spare 00:20:23.894 21:47:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.894 [2024-12-10 21:47:24.648025] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:23.894 21:47:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:25.269 21:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:25.269 21:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:25.269 21:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:25.269 21:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:25.269 21:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:25.269 21:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.269 21:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.269 21:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.269 21:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.269 21:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.269 21:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:20:25.269 "name": "raid_bdev1", 00:20:25.269 "uuid": "74761ef9-46a3-4cb3-92d9-685be775e0d9", 00:20:25.269 "strip_size_kb": 0, 00:20:25.269 "state": "online", 00:20:25.269 "raid_level": "raid1", 00:20:25.269 "superblock": true, 00:20:25.269 "num_base_bdevs": 2, 00:20:25.269 "num_base_bdevs_discovered": 2, 00:20:25.269 "num_base_bdevs_operational": 2, 00:20:25.269 "process": { 00:20:25.269 "type": "rebuild", 00:20:25.269 "target": "spare", 00:20:25.269 "progress": { 00:20:25.269 "blocks": 2560, 00:20:25.269 "percent": 32 00:20:25.269 } 00:20:25.269 }, 00:20:25.269 "base_bdevs_list": [ 00:20:25.269 { 00:20:25.269 "name": "spare", 00:20:25.269 "uuid": "601a7d16-3bae-5750-97be-56d1860d7236", 00:20:25.269 "is_configured": true, 00:20:25.270 "data_offset": 256, 00:20:25.270 "data_size": 7936 00:20:25.270 }, 00:20:25.270 { 00:20:25.270 "name": "BaseBdev2", 00:20:25.270 "uuid": "36581ab8-8ab0-51b6-838c-80258cbb6039", 00:20:25.270 "is_configured": true, 00:20:25.270 "data_offset": 256, 00:20:25.270 "data_size": 7936 00:20:25.270 } 00:20:25.270 ] 00:20:25.270 }' 00:20:25.270 21:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:25.270 21:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:25.270 21:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:25.270 21:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:25.270 21:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:25.270 21:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.270 21:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.270 [2024-12-10 
21:47:25.811597] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:25.270 [2024-12-10 21:47:25.853905] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:25.270 [2024-12-10 21:47:25.854064] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:25.270 [2024-12-10 21:47:25.854084] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:25.270 [2024-12-10 21:47:25.854092] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:25.270 21:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.270 21:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:25.270 21:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:25.270 21:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:25.270 21:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:25.270 21:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:25.270 21:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:25.270 21:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:25.270 21:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:25.270 21:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:25.270 21:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:25.270 21:47:25 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.270 21:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.270 21:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.270 21:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.270 21:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.270 21:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:25.270 "name": "raid_bdev1", 00:20:25.270 "uuid": "74761ef9-46a3-4cb3-92d9-685be775e0d9", 00:20:25.270 "strip_size_kb": 0, 00:20:25.270 "state": "online", 00:20:25.270 "raid_level": "raid1", 00:20:25.270 "superblock": true, 00:20:25.270 "num_base_bdevs": 2, 00:20:25.270 "num_base_bdevs_discovered": 1, 00:20:25.270 "num_base_bdevs_operational": 1, 00:20:25.270 "base_bdevs_list": [ 00:20:25.270 { 00:20:25.270 "name": null, 00:20:25.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.270 "is_configured": false, 00:20:25.270 "data_offset": 0, 00:20:25.270 "data_size": 7936 00:20:25.270 }, 00:20:25.270 { 00:20:25.270 "name": "BaseBdev2", 00:20:25.270 "uuid": "36581ab8-8ab0-51b6-838c-80258cbb6039", 00:20:25.270 "is_configured": true, 00:20:25.270 "data_offset": 256, 00:20:25.270 "data_size": 7936 00:20:25.270 } 00:20:25.270 ] 00:20:25.270 }' 00:20:25.270 21:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:25.270 21:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.837 21:47:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:25.837 21:47:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:25.837 21:47:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:25.837 21:47:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:25.837 21:47:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:25.837 21:47:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.837 21:47:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.837 21:47:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.837 21:47:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.837 21:47:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.837 21:47:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:25.837 "name": "raid_bdev1", 00:20:25.837 "uuid": "74761ef9-46a3-4cb3-92d9-685be775e0d9", 00:20:25.837 "strip_size_kb": 0, 00:20:25.837 "state": "online", 00:20:25.837 "raid_level": "raid1", 00:20:25.837 "superblock": true, 00:20:25.837 "num_base_bdevs": 2, 00:20:25.837 "num_base_bdevs_discovered": 1, 00:20:25.837 "num_base_bdevs_operational": 1, 00:20:25.837 "base_bdevs_list": [ 00:20:25.837 { 00:20:25.837 "name": null, 00:20:25.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.837 "is_configured": false, 00:20:25.837 "data_offset": 0, 00:20:25.837 "data_size": 7936 00:20:25.837 }, 00:20:25.837 { 00:20:25.837 "name": "BaseBdev2", 00:20:25.837 "uuid": "36581ab8-8ab0-51b6-838c-80258cbb6039", 00:20:25.837 "is_configured": true, 00:20:25.837 "data_offset": 256, 
00:20:25.837 "data_size": 7936 00:20:25.837 } 00:20:25.837 ] 00:20:25.837 }' 00:20:25.837 21:47:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:25.837 21:47:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:25.837 21:47:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:25.837 21:47:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:25.837 21:47:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:25.837 21:47:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.837 21:47:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.837 21:47:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.837 21:47:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:25.837 21:47:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.837 21:47:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.837 [2024-12-10 21:47:26.467144] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:25.837 [2024-12-10 21:47:26.467208] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:25.837 [2024-12-10 21:47:26.467231] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:25.837 [2024-12-10 21:47:26.467240] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:25.837 [2024-12-10 21:47:26.467416] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:25.837 [2024-12-10 21:47:26.467446] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:25.837 [2024-12-10 21:47:26.467518] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:25.837 [2024-12-10 21:47:26.467532] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:25.837 [2024-12-10 21:47:26.467542] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:25.837 [2024-12-10 21:47:26.467552] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:25.837 BaseBdev1 00:20:25.837 21:47:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.837 21:47:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:26.772 21:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:26.772 21:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:26.772 21:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:26.772 21:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:26.772 21:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:26.772 21:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:26.772 21:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:26.772 21:47:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:26.772 21:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:26.772 21:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:26.772 21:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.772 21:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.772 21:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.772 21:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:26.772 21:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.772 21:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:26.772 "name": "raid_bdev1", 00:20:26.772 "uuid": "74761ef9-46a3-4cb3-92d9-685be775e0d9", 00:20:26.772 "strip_size_kb": 0, 00:20:26.772 "state": "online", 00:20:26.772 "raid_level": "raid1", 00:20:26.772 "superblock": true, 00:20:26.772 "num_base_bdevs": 2, 00:20:26.772 "num_base_bdevs_discovered": 1, 00:20:26.772 "num_base_bdevs_operational": 1, 00:20:26.772 "base_bdevs_list": [ 00:20:26.772 { 00:20:26.772 "name": null, 00:20:26.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.772 "is_configured": false, 00:20:26.772 "data_offset": 0, 00:20:26.772 "data_size": 7936 00:20:26.772 }, 00:20:26.772 { 00:20:26.772 "name": "BaseBdev2", 00:20:26.772 "uuid": "36581ab8-8ab0-51b6-838c-80258cbb6039", 00:20:26.772 "is_configured": true, 00:20:26.772 "data_offset": 256, 00:20:26.772 "data_size": 7936 00:20:26.772 } 00:20:26.772 ] 00:20:26.772 }' 00:20:26.772 21:47:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:26.772 21:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:27.341 21:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:27.341 21:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:27.341 21:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:27.341 21:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:27.341 21:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:27.341 21:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.341 21:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.341 21:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:27.341 21:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.341 21:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.341 21:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:27.341 "name": "raid_bdev1", 00:20:27.341 "uuid": "74761ef9-46a3-4cb3-92d9-685be775e0d9", 00:20:27.341 "strip_size_kb": 0, 00:20:27.341 "state": "online", 00:20:27.341 "raid_level": "raid1", 00:20:27.341 "superblock": true, 00:20:27.341 "num_base_bdevs": 2, 00:20:27.341 "num_base_bdevs_discovered": 1, 00:20:27.341 "num_base_bdevs_operational": 1, 00:20:27.341 "base_bdevs_list": [ 00:20:27.341 { 00:20:27.341 "name": 
null, 00:20:27.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.341 "is_configured": false, 00:20:27.341 "data_offset": 0, 00:20:27.341 "data_size": 7936 00:20:27.341 }, 00:20:27.341 { 00:20:27.341 "name": "BaseBdev2", 00:20:27.341 "uuid": "36581ab8-8ab0-51b6-838c-80258cbb6039", 00:20:27.341 "is_configured": true, 00:20:27.341 "data_offset": 256, 00:20:27.341 "data_size": 7936 00:20:27.341 } 00:20:27.341 ] 00:20:27.341 }' 00:20:27.341 21:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:27.341 21:47:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:27.341 21:47:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:27.341 21:47:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:27.341 21:47:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:27.341 21:47:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:20:27.341 21:47:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:27.341 21:47:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:27.341 21:47:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:27.341 21:47:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:27.341 21:47:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:27.341 21:47:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:27.341 21:47:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.341 21:47:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:27.341 [2024-12-10 21:47:28.080513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:27.341 [2024-12-10 21:47:28.080684] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:27.341 [2024-12-10 21:47:28.080703] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:27.341 request: 00:20:27.341 { 00:20:27.341 "base_bdev": "BaseBdev1", 00:20:27.341 "raid_bdev": "raid_bdev1", 00:20:27.341 "method": "bdev_raid_add_base_bdev", 00:20:27.341 "req_id": 1 00:20:27.341 } 00:20:27.341 Got JSON-RPC error response 00:20:27.341 response: 00:20:27.341 { 00:20:27.341 "code": -22, 00:20:27.341 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:27.341 } 00:20:27.341 21:47:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:27.341 21:47:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:20:27.341 21:47:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:27.341 21:47:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:27.341 21:47:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:27.341 21:47:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:28.719 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:20:28.719 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:28.719 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:28.719 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:28.719 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:28.719 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:28.719 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:28.719 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:28.719 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:28.719 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:28.719 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.719 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.719 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.719 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:28.719 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.719 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:28.719 "name": "raid_bdev1", 00:20:28.719 "uuid": "74761ef9-46a3-4cb3-92d9-685be775e0d9", 00:20:28.719 "strip_size_kb": 0, 
00:20:28.719 "state": "online", 00:20:28.719 "raid_level": "raid1", 00:20:28.719 "superblock": true, 00:20:28.719 "num_base_bdevs": 2, 00:20:28.719 "num_base_bdevs_discovered": 1, 00:20:28.719 "num_base_bdevs_operational": 1, 00:20:28.719 "base_bdevs_list": [ 00:20:28.719 { 00:20:28.719 "name": null, 00:20:28.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.719 "is_configured": false, 00:20:28.719 "data_offset": 0, 00:20:28.719 "data_size": 7936 00:20:28.719 }, 00:20:28.719 { 00:20:28.719 "name": "BaseBdev2", 00:20:28.719 "uuid": "36581ab8-8ab0-51b6-838c-80258cbb6039", 00:20:28.719 "is_configured": true, 00:20:28.719 "data_offset": 256, 00:20:28.719 "data_size": 7936 00:20:28.719 } 00:20:28.719 ] 00:20:28.719 }' 00:20:28.719 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:28.719 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:28.979 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:28.979 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:28.979 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:28.979 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:28.979 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:28.979 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.979 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.979 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.979 
21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:28.979 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.979 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:28.979 "name": "raid_bdev1", 00:20:28.979 "uuid": "74761ef9-46a3-4cb3-92d9-685be775e0d9", 00:20:28.979 "strip_size_kb": 0, 00:20:28.979 "state": "online", 00:20:28.979 "raid_level": "raid1", 00:20:28.979 "superblock": true, 00:20:28.979 "num_base_bdevs": 2, 00:20:28.979 "num_base_bdevs_discovered": 1, 00:20:28.979 "num_base_bdevs_operational": 1, 00:20:28.979 "base_bdevs_list": [ 00:20:28.979 { 00:20:28.979 "name": null, 00:20:28.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.979 "is_configured": false, 00:20:28.979 "data_offset": 0, 00:20:28.979 "data_size": 7936 00:20:28.979 }, 00:20:28.979 { 00:20:28.979 "name": "BaseBdev2", 00:20:28.979 "uuid": "36581ab8-8ab0-51b6-838c-80258cbb6039", 00:20:28.979 "is_configured": true, 00:20:28.979 "data_offset": 256, 00:20:28.979 "data_size": 7936 00:20:28.979 } 00:20:28.979 ] 00:20:28.979 }' 00:20:28.980 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:28.980 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:28.980 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:28.980 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:28.980 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89195 00:20:28.980 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89195 ']' 00:20:28.980 21:47:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89195 00:20:28.980 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:20:28.980 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:28.980 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89195 00:20:28.980 killing process with pid 89195 00:20:28.980 Received shutdown signal, test time was about 60.000000 seconds 00:20:28.980 00:20:28.980 Latency(us) 00:20:28.980 [2024-12-10T21:47:29.763Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:28.980 [2024-12-10T21:47:29.763Z] =================================================================================================================== 00:20:28.980 [2024-12-10T21:47:29.763Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:28.980 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:28.980 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:28.980 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89195' 00:20:28.980 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89195 00:20:28.980 [2024-12-10 21:47:29.700813] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:28.980 [2024-12-10 21:47:29.700935] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:28.980 21:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89195 00:20:28.980 [2024-12-10 21:47:29.700984] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:20:28.980 [2024-12-10 21:47:29.700996] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:29.239 [2024-12-10 21:47:29.998938] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:30.620 21:47:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:20:30.620 00:20:30.620 real 0m17.389s 00:20:30.620 user 0m22.788s 00:20:30.620 sys 0m1.560s 00:20:30.620 21:47:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:30.620 ************************************ 00:20:30.620 END TEST raid_rebuild_test_sb_md_interleaved 00:20:30.620 ************************************ 00:20:30.620 21:47:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:30.620 21:47:31 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:20:30.620 21:47:31 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:20:30.620 21:47:31 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89195 ']' 00:20:30.620 21:47:31 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89195 00:20:30.620 21:47:31 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:20:30.620 00:20:30.620 real 12m12.213s 00:20:30.620 user 16m32.223s 00:20:30.620 sys 1m51.149s 00:20:30.620 21:47:31 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:30.620 ************************************ 00:20:30.620 END TEST bdev_raid 00:20:30.620 ************************************ 00:20:30.620 21:47:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:30.620 21:47:31 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:30.620 21:47:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:30.620 21:47:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:30.620 21:47:31 -- common/autotest_common.sh@10 -- # set +x 00:20:30.620 
************************************ 00:20:30.620 START TEST spdkcli_raid 00:20:30.620 ************************************ 00:20:30.620 21:47:31 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:30.620 * Looking for test storage... 00:20:30.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:30.620 21:47:31 spdkcli_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:30.620 21:47:31 spdkcli_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:20:30.620 21:47:31 spdkcli_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:30.880 21:47:31 spdkcli_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:30.880 21:47:31 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:30.880 21:47:31 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:30.880 21:47:31 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:30.880 21:47:31 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:20:30.880 21:47:31 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:20:30.880 21:47:31 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:20:30.880 21:47:31 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:20:30.880 21:47:31 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:20:30.880 21:47:31 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:20:30.880 21:47:31 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:20:30.880 21:47:31 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:30.880 21:47:31 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:20:30.880 21:47:31 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:20:30.880 21:47:31 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:30.880 21:47:31 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:30.880 21:47:31 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:20:30.880 21:47:31 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:20:30.880 21:47:31 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:30.880 21:47:31 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:20:30.880 21:47:31 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:20:30.880 21:47:31 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:20:30.880 21:47:31 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:20:30.880 21:47:31 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:30.880 21:47:31 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:20:30.880 21:47:31 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:20:30.880 21:47:31 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:30.880 21:47:31 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:30.880 21:47:31 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:20:30.880 21:47:31 spdkcli_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:30.880 21:47:31 spdkcli_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:30.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.880 --rc genhtml_branch_coverage=1 00:20:30.880 --rc genhtml_function_coverage=1 00:20:30.880 --rc genhtml_legend=1 00:20:30.880 --rc geninfo_all_blocks=1 00:20:30.880 --rc geninfo_unexecuted_blocks=1 00:20:30.880 00:20:30.880 ' 00:20:30.880 21:47:31 spdkcli_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:30.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.880 --rc genhtml_branch_coverage=1 00:20:30.880 --rc genhtml_function_coverage=1 00:20:30.880 --rc genhtml_legend=1 00:20:30.880 --rc geninfo_all_blocks=1 00:20:30.880 --rc geninfo_unexecuted_blocks=1 00:20:30.880 00:20:30.880 ' 00:20:30.880 
21:47:31 spdkcli_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:30.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.880 --rc genhtml_branch_coverage=1 00:20:30.880 --rc genhtml_function_coverage=1 00:20:30.880 --rc genhtml_legend=1 00:20:30.880 --rc geninfo_all_blocks=1 00:20:30.880 --rc geninfo_unexecuted_blocks=1 00:20:30.880 00:20:30.880 ' 00:20:30.880 21:47:31 spdkcli_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:30.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.880 --rc genhtml_branch_coverage=1 00:20:30.880 --rc genhtml_function_coverage=1 00:20:30.880 --rc genhtml_legend=1 00:20:30.880 --rc geninfo_all_blocks=1 00:20:30.880 --rc geninfo_unexecuted_blocks=1 00:20:30.880 00:20:30.880 ' 00:20:30.880 21:47:31 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:20:30.880 21:47:31 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:20:30.880 21:47:31 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:20:30.880 21:47:31 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:20:30.880 21:47:31 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:20:30.880 21:47:31 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:20:30.880 21:47:31 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:20:30.880 21:47:31 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:20:30.880 21:47:31 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:20:30.880 21:47:31 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:20:30.880 21:47:31 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:20:30.880 21:47:31 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:20:30.880 21:47:31 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:20:30.880 21:47:31 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:20:30.880 21:47:31 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:20:30.880 21:47:31 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:20:30.880 21:47:31 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:20:30.880 21:47:31 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:20:30.880 21:47:31 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:20:30.880 21:47:31 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:20:30.880 21:47:31 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:20:30.880 21:47:31 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:20:30.880 21:47:31 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:20:30.880 21:47:31 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:20:30.880 21:47:31 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:20:30.880 21:47:31 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:30.880 21:47:31 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:30.881 21:47:31 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:30.881 21:47:31 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:20:30.881 21:47:31 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:20:30.881 21:47:31 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:20:30.881 21:47:31 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:20:30.881 21:47:31 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:20:30.881 21:47:31 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:30.881 21:47:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:30.881 21:47:31 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:20:30.881 21:47:31 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89872 00:20:30.881 21:47:31 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:20:30.881 21:47:31 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89872 00:20:30.881 21:47:31 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 89872 ']' 00:20:30.881 21:47:31 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:30.881 21:47:31 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:30.881 21:47:31 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:30.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:30.881 21:47:31 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:30.881 21:47:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:30.881 [2024-12-10 21:47:31.574382] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:20:30.881 [2024-12-10 21:47:31.574615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89872 ] 00:20:31.140 [2024-12-10 21:47:31.747153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:31.140 [2024-12-10 21:47:31.855000] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.140 [2024-12-10 21:47:31.855039] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:32.079 21:47:32 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:32.079 21:47:32 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:20:32.079 21:47:32 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:20:32.079 21:47:32 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:32.079 21:47:32 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:32.079 21:47:32 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:20:32.079 21:47:32 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:32.079 21:47:32 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:32.079 21:47:32 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:20:32.079 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:20:32.079 ' 00:20:33.457 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:20:33.457 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:20:33.717 21:47:34 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:20:33.717 21:47:34 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:33.717 21:47:34 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:20:33.717 21:47:34 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:20:33.717 21:47:34 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:33.717 21:47:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:33.717 21:47:34 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:20:33.717 ' 00:20:35.096 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:20:35.096 21:47:35 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:20:35.097 21:47:35 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:35.097 21:47:35 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:35.097 21:47:35 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:20:35.097 21:47:35 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:35.097 21:47:35 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:35.097 21:47:35 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:20:35.097 21:47:35 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:20:35.356 21:47:36 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:20:35.615 21:47:36 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:20:35.615 21:47:36 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:20:35.615 21:47:36 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:35.615 21:47:36 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:35.615 21:47:36 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:20:35.615 21:47:36 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:35.615 21:47:36 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:35.615 21:47:36 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:20:35.615 ' 00:20:36.553 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:20:36.553 21:47:37 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:20:36.553 21:47:37 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:36.553 21:47:37 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:36.813 21:47:37 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:20:36.813 21:47:37 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:36.813 21:47:37 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:36.813 21:47:37 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:20:36.813 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:20:36.813 ' 00:20:38.210 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:20:38.210 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:20:38.210 21:47:38 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:20:38.210 21:47:38 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:38.210 21:47:38 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:38.210 21:47:38 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89872 00:20:38.210 21:47:38 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89872 ']' 00:20:38.210 21:47:38 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89872 00:20:38.210 21:47:38 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:20:38.210 21:47:38 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:38.210 21:47:38 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89872 00:20:38.210 21:47:38 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:38.210 21:47:38 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:38.210 21:47:38 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89872' 00:20:38.210 killing process with pid 89872 00:20:38.210 21:47:38 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 89872 00:20:38.210 21:47:38 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 89872 00:20:40.758 21:47:41 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:20:40.758 21:47:41 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89872 ']' 00:20:40.758 21:47:41 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89872 00:20:40.758 21:47:41 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89872 ']' 00:20:40.758 21:47:41 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89872 00:20:40.758 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89872) - No such process 00:20:40.758 21:47:41 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 89872 is not found' 00:20:40.758 Process with pid 89872 is not found 00:20:40.758 21:47:41 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:20:40.758 21:47:41 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:20:40.758 21:47:41 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:20:40.758 21:47:41 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:20:40.758 00:20:40.758 real 0m10.108s 00:20:40.758 user 0m20.866s 00:20:40.758 sys 
0m1.108s 00:20:40.758 21:47:41 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:40.758 21:47:41 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:40.758 ************************************ 00:20:40.758 END TEST spdkcli_raid 00:20:40.758 ************************************ 00:20:40.759 21:47:41 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:20:40.759 21:47:41 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:40.759 21:47:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:40.759 21:47:41 -- common/autotest_common.sh@10 -- # set +x 00:20:40.759 ************************************ 00:20:40.759 START TEST blockdev_raid5f 00:20:40.759 ************************************ 00:20:40.759 21:47:41 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:20:40.759 * Looking for test storage... 00:20:40.759 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:20:40.759 21:47:41 blockdev_raid5f -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:40.759 21:47:41 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lcov --version 00:20:40.759 21:47:41 blockdev_raid5f -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:41.019 21:47:41 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:41.019 21:47:41 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:41.019 21:47:41 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:41.019 21:47:41 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:41.019 21:47:41 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:20:41.019 21:47:41 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:20:41.019 21:47:41 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:20:41.019 21:47:41 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:20:41.019 21:47:41 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:20:41.019 21:47:41 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:20:41.019 21:47:41 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:20:41.019 21:47:41 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:41.019 21:47:41 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:20:41.019 21:47:41 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:20:41.019 21:47:41 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:41.019 21:47:41 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:41.019 21:47:41 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:20:41.019 21:47:41 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:20:41.019 21:47:41 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:41.019 21:47:41 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:20:41.019 21:47:41 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:20:41.019 21:47:41 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:20:41.019 21:47:41 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:20:41.019 21:47:41 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:41.019 21:47:41 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:20:41.019 21:47:41 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:20:41.019 21:47:41 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:41.019 21:47:41 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:41.019 21:47:41 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:20:41.019 21:47:41 blockdev_raid5f -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:41.019 21:47:41 blockdev_raid5f -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:41.019 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.019 --rc genhtml_branch_coverage=1 00:20:41.019 --rc genhtml_function_coverage=1 00:20:41.019 --rc genhtml_legend=1 00:20:41.019 --rc geninfo_all_blocks=1 00:20:41.019 --rc geninfo_unexecuted_blocks=1 00:20:41.019 00:20:41.019 ' 00:20:41.019 21:47:41 blockdev_raid5f -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:41.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.019 --rc genhtml_branch_coverage=1 00:20:41.019 --rc genhtml_function_coverage=1 00:20:41.019 --rc genhtml_legend=1 00:20:41.019 --rc geninfo_all_blocks=1 00:20:41.019 --rc geninfo_unexecuted_blocks=1 00:20:41.019 00:20:41.019 ' 00:20:41.019 21:47:41 blockdev_raid5f -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:41.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.019 --rc genhtml_branch_coverage=1 00:20:41.019 --rc genhtml_function_coverage=1 00:20:41.019 --rc genhtml_legend=1 00:20:41.019 --rc geninfo_all_blocks=1 00:20:41.019 --rc geninfo_unexecuted_blocks=1 00:20:41.019 00:20:41.019 ' 00:20:41.019 21:47:41 blockdev_raid5f -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:41.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.019 --rc genhtml_branch_coverage=1 00:20:41.019 --rc genhtml_function_coverage=1 00:20:41.019 --rc genhtml_legend=1 00:20:41.019 --rc geninfo_all_blocks=1 00:20:41.019 --rc geninfo_unexecuted_blocks=1 00:20:41.019 00:20:41.019 ' 00:20:41.019 21:47:41 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:20:41.019 21:47:41 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:20:41.019 21:47:41 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:20:41.019 21:47:41 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:41.019 21:47:41 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:20:41.019 21:47:41 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:20:41.019 21:47:41 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:20:41.019 21:47:41 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:20:41.019 21:47:41 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:20:41.019 21:47:41 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:20:41.019 21:47:41 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:20:41.019 21:47:41 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:20:41.019 21:47:41 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:20:41.019 21:47:41 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:20:41.019 21:47:41 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:20:41.019 21:47:41 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:20:41.019 21:47:41 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:20:41.019 21:47:41 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:20:41.019 21:47:41 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:20:41.019 21:47:41 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:20:41.019 21:47:41 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:20:41.019 21:47:41 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:20:41.019 21:47:41 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:20:41.019 21:47:41 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:20:41.019 21:47:41 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90146 00:20:41.019 21:47:41 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:41.019 21:47:41 blockdev_raid5f -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:20:41.019 21:47:41 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90146 00:20:41.019 21:47:41 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90146 ']' 00:20:41.019 21:47:41 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.019 21:47:41 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:41.019 21:47:41 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:41.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:41.019 21:47:41 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:41.019 21:47:41 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:41.019 [2024-12-10 21:47:41.740048] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
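The `waitforlisten 90146` step traced above blocks until the freshly started `spdk_tgt` answers on `/var/tmp/spdk.sock`. A minimal sketch of that idea follows; the loop body and the 100 × 0.1 s timeout are assumptions for illustration, not the actual helper from `common/autotest_common.sh`:

```shell
# Simplified sketch of the waitforlisten pattern: poll until the target
# process has created its RPC socket path, giving up early if the process
# dies first. Timeout and probe are illustrative assumptions.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < 100; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died
        [ -e "$rpc_addr" ] && return 0           # socket path has appeared
        sleep 0.1
    done
    return 1
}
```

The real helper then goes on to issue an RPC over the socket; this sketch only covers the wait-for-startup half.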
00:20:41.020 [2024-12-10 21:47:41.740231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90146 ] 00:20:41.279 [2024-12-10 21:47:41.915709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.279 [2024-12-10 21:47:42.024360] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.219 21:47:42 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:42.219 21:47:42 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:20:42.219 21:47:42 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:20:42.219 21:47:42 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:20:42.219 21:47:42 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:20:42.219 21:47:42 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.219 21:47:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:42.219 Malloc0 00:20:42.219 Malloc1 00:20:42.219 Malloc2 00:20:42.219 21:47:42 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.219 21:47:42 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:20:42.219 21:47:42 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.219 21:47:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:42.219 21:47:42 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.219 21:47:42 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:20:42.219 21:47:42 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:20:42.219 21:47:42 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.219 21:47:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:42.480 21:47:43 
blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.480 21:47:43 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:20:42.480 21:47:43 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.480 21:47:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:42.480 21:47:43 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.480 21:47:43 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:20:42.480 21:47:43 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.480 21:47:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:42.480 21:47:43 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.480 21:47:43 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:20:42.480 21:47:43 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:20:42.480 21:47:43 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:20:42.480 21:47:43 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.480 21:47:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:42.480 21:47:43 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.480 21:47:43 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:20:42.480 21:47:43 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "bdc96311-f4d7-45e1-9503-45a439749296"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "bdc96311-f4d7-45e1-9503-45a439749296",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' 
"flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "bdc96311-f4d7-45e1-9503-45a439749296",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "b4f38ac9-c62b-4e07-ba05-ede89edc49d0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "2ec81ea5-e474-4594-9c47-e4425ac34f4d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "32f85b13-8530-4a30-a971-00d2d6e4bd17",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:20:42.480 21:47:43 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:20:42.480 21:47:43 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:20:42.480 21:47:43 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:20:42.480 21:47:43 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:20:42.480 21:47:43 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 90146 00:20:42.480 21:47:43 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90146 ']' 00:20:42.480 21:47:43 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90146 00:20:42.480 21:47:43 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:20:42.480 21:47:43 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:42.480 
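The `blockdev.sh@785/786` pipeline traced above feeds `rpc_cmd bdev_get_bdevs` through `jq -r '.[] | select(.claimed == false)'` and then `jq -r .name` to pick the unclaimed bdev names out of the JSON dump. That filter can be reproduced standalone; a sketch, assuming `jq` is installed and using a trimmed stand-in for the JSON shown in the log:

```shell
# Filter unclaimed bdevs and print their names, mirroring the jq pipeline
# in the trace above. The JSON here is a cut-down illustrative sample.
bdevs_json='[{"name": "raid5f",  "claimed": false, "product_name": "Raid Volume"},
             {"name": "Malloc0", "claimed": true,  "product_name": "Malloc disk"}]'
echo "$bdevs_json" | jq -r '.[] | select(.claimed == false) | .name'
```

Only `raid5f` survives the filter, which is why it becomes `hello_world_bdev` a few records later.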
21:47:43 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90146 00:20:42.480 killing process with pid 90146 00:20:42.480 21:47:43 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:42.480 21:47:43 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:42.480 21:47:43 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90146' 00:20:42.480 21:47:43 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90146 00:20:42.480 21:47:43 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90146 00:20:45.772 21:47:45 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:45.772 21:47:45 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:20:45.772 21:47:45 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:45.772 21:47:45 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:45.772 21:47:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:45.772 ************************************ 00:20:45.772 START TEST bdev_hello_world 00:20:45.772 ************************************ 00:20:45.772 21:47:45 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:20:45.772 [2024-12-10 21:47:45.935567] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:20:45.772 [2024-12-10 21:47:45.935702] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90219 ] 00:20:45.772 [2024-12-10 21:47:46.107371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.772 [2024-12-10 21:47:46.212329] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.031 [2024-12-10 21:47:46.719170] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:20:46.031 [2024-12-10 21:47:46.719225] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:20:46.031 [2024-12-10 21:47:46.719242] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:20:46.031 [2024-12-10 21:47:46.719723] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:20:46.031 [2024-12-10 21:47:46.719886] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:20:46.031 [2024-12-10 21:47:46.719905] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:20:46.031 [2024-12-10 21:47:46.719965] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:20:46.031 00:20:46.031 [2024-12-10 21:47:46.719982] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:20:47.411 00:20:47.411 real 0m2.252s 00:20:47.411 user 0m1.896s 00:20:47.411 sys 0m0.233s 00:20:47.411 ************************************ 00:20:47.411 END TEST bdev_hello_world 00:20:47.411 ************************************ 00:20:47.411 21:47:48 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:47.411 21:47:48 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:20:47.411 21:47:48 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:20:47.411 21:47:48 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:47.411 21:47:48 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:47.411 21:47:48 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:47.411 ************************************ 00:20:47.411 START TEST bdev_bounds 00:20:47.411 ************************************ 00:20:47.411 21:47:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:20:47.411 21:47:48 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90261 00:20:47.411 21:47:48 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:47.411 21:47:48 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:20:47.411 21:47:48 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90261' 00:20:47.411 Process bdevio pid: 90261 00:20:47.411 21:47:48 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90261 00:20:47.411 21:47:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90261 ']' 00:20:47.411 21:47:48 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.411 21:47:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:47.411 21:47:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.411 21:47:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:47.411 21:47:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:47.671 [2024-12-10 21:47:48.254215] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:20:47.671 [2024-12-10 21:47:48.254332] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90261 ] 00:20:47.671 [2024-12-10 21:47:48.428740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:47.930 [2024-12-10 21:47:48.542399] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:47.930 [2024-12-10 21:47:48.542542] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.930 [2024-12-10 21:47:48.542593] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:20:48.500 21:47:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:48.500 21:47:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:20:48.500 21:47:49 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:20:48.500 I/O targets: 00:20:48.500 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:20:48.500 00:20:48.500 
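As a quick sanity check, the I/O target banner above is internally consistent:

```shell
# 131072 blocks x 512 bytes = 67108864 bytes = 64 MiB, matching the
# "raid5f: 131072 blocks of 512 bytes (64 MiB)" line in the bdevio output.
blocks=131072
block_size=512
echo "$(( blocks * block_size / 1024 / 1024 )) MiB"   # prints "64 MiB"
```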
00:20:48.500 CUnit - A unit testing framework for C - Version 2.1-3 00:20:48.500 http://cunit.sourceforge.net/ 00:20:48.500 00:20:48.500 00:20:48.500 Suite: bdevio tests on: raid5f 00:20:48.500 Test: blockdev write read block ...passed 00:20:48.500 Test: blockdev write zeroes read block ...passed 00:20:48.500 Test: blockdev write zeroes read no split ...passed 00:20:48.760 Test: blockdev write zeroes read split ...passed 00:20:48.760 Test: blockdev write zeroes read split partial ...passed 00:20:48.760 Test: blockdev reset ...passed 00:20:48.760 Test: blockdev write read 8 blocks ...passed 00:20:48.760 Test: blockdev write read size > 128k ...passed 00:20:48.760 Test: blockdev write read invalid size ...passed 00:20:48.760 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:48.760 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:48.760 Test: blockdev write read max offset ...passed 00:20:48.760 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:48.760 Test: blockdev writev readv 8 blocks ...passed 00:20:48.760 Test: blockdev writev readv 30 x 1block ...passed 00:20:48.760 Test: blockdev writev readv block ...passed 00:20:48.761 Test: blockdev writev readv size > 128k ...passed 00:20:48.761 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:48.761 Test: blockdev comparev and writev ...passed 00:20:48.761 Test: blockdev nvme passthru rw ...passed 00:20:48.761 Test: blockdev nvme passthru vendor specific ...passed 00:20:48.761 Test: blockdev nvme admin passthru ...passed 00:20:48.761 Test: blockdev copy ...passed 00:20:48.761 00:20:48.761 Run Summary: Type Total Ran Passed Failed Inactive 00:20:48.761 suites 1 1 n/a 0 0 00:20:48.761 tests 23 23 23 0 0 00:20:48.761 asserts 130 130 130 0 n/a 00:20:48.761 00:20:48.761 Elapsed time = 0.625 seconds 00:20:48.761 0 00:20:48.761 21:47:49 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90261 00:20:48.761 
21:47:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90261 ']' 00:20:48.761 21:47:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90261 00:20:48.761 21:47:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:20:48.761 21:47:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:48.761 21:47:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90261 00:20:48.761 21:47:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:48.761 21:47:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:48.761 killing process with pid 90261 00:20:48.761 21:47:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90261' 00:20:48.761 21:47:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90261 00:20:48.761 21:47:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90261 00:20:50.150 21:47:50 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:20:50.150 00:20:50.150 real 0m2.720s 00:20:50.150 user 0m6.775s 00:20:50.150 sys 0m0.361s 00:20:50.150 21:47:50 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:50.150 21:47:50 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:50.150 ************************************ 00:20:50.150 END TEST bdev_bounds 00:20:50.150 ************************************ 00:20:50.410 21:47:50 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:20:50.410 21:47:50 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:50.410 21:47:50 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:50.410 
21:47:50 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:50.410 ************************************ 00:20:50.410 START TEST bdev_nbd 00:20:50.410 ************************************ 00:20:50.410 21:47:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:20:50.410 21:47:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:20:50.410 21:47:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:20:50.410 21:47:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:50.410 21:47:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:50.410 21:47:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:20:50.410 21:47:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:20:50.410 21:47:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:20:50.410 21:47:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:20:50.410 21:47:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:20:50.410 21:47:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:20:50.410 21:47:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:20:50.410 21:47:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:20:50.410 21:47:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:20:50.410 21:47:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:20:50.410 21:47:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:20:50.410 21:47:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90315 00:20:50.410 21:47:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:50.410 21:47:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:20:50.410 21:47:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90315 /var/tmp/spdk-nbd.sock 00:20:50.410 21:47:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90315 ']' 00:20:50.410 21:47:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:20:50.410 21:47:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:50.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:20:50.410 21:47:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:20:50.410 21:47:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:50.410 21:47:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:50.410 [2024-12-10 21:47:51.053434] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:20:50.410 [2024-12-10 21:47:51.053553] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:50.670 [2024-12-10 21:47:51.225010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.670 [2024-12-10 21:47:51.334104] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.239 21:47:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:51.239 21:47:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:20:51.239 21:47:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:20:51.239 21:47:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:51.239 21:47:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:20:51.239 21:47:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:20:51.239 21:47:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:20:51.239 21:47:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:51.239 21:47:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:20:51.239 21:47:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:20:51.239 21:47:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:20:51.239 21:47:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:20:51.239 21:47:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:20:51.239 21:47:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:20:51.239 21:47:51 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:20:51.498 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:20:51.498 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:20:51.498 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:20:51.498 21:47:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:51.498 21:47:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:51.498 21:47:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:51.498 21:47:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:51.498 21:47:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:51.498 21:47:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:51.498 21:47:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:51.498 21:47:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:51.498 21:47:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:51.498 1+0 records in 00:20:51.498 1+0 records out 00:20:51.498 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396871 s, 10.3 MB/s 00:20:51.498 21:47:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:51.498 21:47:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:51.498 21:47:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:51.498 21:47:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
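The `waitfornbd` helper seen in the trace polls `/proc/partitions` (`grep -q -w nbd0`) up to 20 times before giving up, then confirms the device with a direct-I/O `dd` read. Roughly the same polling loop in Python; the partitions path is parameterized here purely so the sketch can be exercised against a fake file:

```python
import time

def wait_for_nbd(nbd_name, tries=20, interval=0.1,
                 partitions_path="/proc/partitions"):
    """Poll the partitions table until nbd_name (e.g. 'nbd0') appears,
    like waitfornbd in the trace (grep -q -w, up to 20 tries).
    partitions_path defaults to the real file but is injectable."""
    for _ in range(tries):
        try:
            with open(partitions_path) as f:
                # Last whitespace-separated field of each row is the device name.
                if any(nbd_name == line.split()[-1]
                       for line in f if line.split()):
                    return True
        except OSError:
            pass  # table unreadable; treat as "not there yet"
        time.sleep(interval)
    return False
```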
00:20:51.498 21:47:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:51.498 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:51.498 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:20:51.498 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:51.757 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:20:51.757 { 00:20:51.757 "nbd_device": "/dev/nbd0", 00:20:51.757 "bdev_name": "raid5f" 00:20:51.757 } 00:20:51.757 ]' 00:20:51.757 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:20:51.757 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:20:51.757 { 00:20:51.757 "nbd_device": "/dev/nbd0", 00:20:51.757 "bdev_name": "raid5f" 00:20:51.757 } 00:20:51.757 ]' 00:20:51.757 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:20:51.757 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:51.757 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:51.757 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:51.757 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:51.757 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:51.757 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:51.757 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:52.016 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:20:52.016 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:52.016 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:52.016 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:52.016 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:52.016 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:52.016 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:52.016 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:52.016 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:52.016 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:52.016 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:52.276 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:52.276 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:52.276 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:52.276 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:52.276 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:52.276 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:52.276 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:52.276 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:52.276 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:52.276 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:20:52.276 21:47:52 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:20:52.276 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:20:52.276 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:20:52.276 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:52.276 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:20:52.276 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:20:52.276 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:20:52.276 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:20:52.276 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:20:52.276 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:52.276 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:20:52.276 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:52.276 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:52.276 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:52.276 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:20:52.276 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:52.276 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:52.276 21:47:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:20:52.535 /dev/nbd0 00:20:52.535 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:52.535 21:47:53 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:52.535 21:47:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:52.535 21:47:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:52.535 21:47:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:52.535 21:47:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:52.535 21:47:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:52.535 21:47:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:52.535 21:47:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:52.535 21:47:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:52.535 21:47:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:52.535 1+0 records in 00:20:52.535 1+0 records out 00:20:52.535 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307472 s, 13.3 MB/s 00:20:52.535 21:47:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:52.535 21:47:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:52.535 21:47:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:52.535 21:47:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:52.535 21:47:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:52.535 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:52.535 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:52.535 21:47:53 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:52.535 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:52.535 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:52.535 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:52.535 { 00:20:52.535 "nbd_device": "/dev/nbd0", 00:20:52.535 "bdev_name": "raid5f" 00:20:52.535 } 00:20:52.535 ]' 00:20:52.795 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:52.795 { 00:20:52.795 "nbd_device": "/dev/nbd0", 00:20:52.795 "bdev_name": "raid5f" 00:20:52.795 } 00:20:52.795 ]' 00:20:52.795 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:52.795 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:20:52.795 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:20:52.795 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:52.795 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:20:52.795 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:20:52.795 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:20:52.795 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:20:52.796 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:20:52.796 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:20:52.796 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:52.796 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:20:52.796 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:52.796 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:20:52.796 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:20:52.796 256+0 records in 00:20:52.796 256+0 records out 00:20:52.796 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00795126 s, 132 MB/s 00:20:52.796 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:52.796 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:20:52.796 256+0 records in 00:20:52.796 256+0 records out 00:20:52.796 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0297781 s, 35.2 MB/s 00:20:52.796 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:20:52.796 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:20:52.796 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:52.796 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:20:52.796 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:52.796 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:20:52.796 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:20:52.796 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:52.796 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:20:52.796 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:52.796 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:52.796 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:52.796 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:52.796 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:52.796 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:52.796 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:52.796 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:53.056 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:53.056 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:53.056 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:53.056 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:53.056 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:53.056 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:53.056 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:53.056 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:53.056 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:53.056 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:53.056 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
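The `nbd_dd_data_verify` steps traced above fill a temp file from `/dev/urandom` (256 blocks of 4096 bytes), `dd` it onto `/dev/nbd0`, then `cmp -b -n 1M` the device against the file. A simplified sketch of that round-trip check, assuming `device_path` is any writable file standing in for the NBD node (the real test's `oflag=direct` is omitted here):

```python
import os

def dd_roundtrip_verify(device_path, nbytes=1024 * 1024, bs=4096):
    """Random-write-then-compare check in the spirit of
    nbd_dd_data_verify above: write a random pattern in bs-sized
    chunks, read it back, and compare the first nbytes."""
    pattern = os.urandom(nbytes)
    with open(device_path, "wb") as dev:
        for off in range(0, nbytes, bs):        # block-sized writes, like bs=4096
            dev.write(pattern[off:off + bs])
    with open(device_path, "rb") as dev:
        return dev.read(nbytes) == pattern      # cmp -n 1M equivalent
```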
00:20:53.316 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:53.316 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:53.316 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:53.316 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:53.316 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:53.316 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:53.316 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:53.316 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:53.316 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:53.316 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:20:53.316 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:20:53.316 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:20:53.316 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:53.316 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:53.316 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:20:53.316 21:47:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:20:53.575 malloc_lvol_verify 00:20:53.575 21:47:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:20:53.575 1d30bbc9-e310-4516-9e24-682ccd791073 00:20:53.575 21:47:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:20:53.835 8850c891-8378-471f-8eef-f6d6e0096df8 00:20:53.835 21:47:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:20:54.095 /dev/nbd0 00:20:54.095 21:47:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:20:54.095 21:47:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:20:54.095 21:47:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:20:54.095 21:47:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:20:54.095 21:47:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:20:54.095 mke2fs 1.47.0 (5-Feb-2023) 00:20:54.095 Discarding device blocks: 0/4096 done 00:20:54.095 Creating filesystem with 4096 1k blocks and 1024 inodes 00:20:54.095 00:20:54.095 Allocating group tables: 0/1 done 00:20:54.095 Writing inode tables: 0/1 done 00:20:54.095 Creating journal (1024 blocks): done 00:20:54.095 Writing superblocks and filesystem accounting information: 0/1 done 00:20:54.095 00:20:54.095 21:47:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:54.095 21:47:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:54.095 21:47:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:54.095 21:47:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:54.095 21:47:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:54.095 21:47:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:54.095 21:47:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:54.354 21:47:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:54.354 21:47:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:54.354 21:47:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:54.354 21:47:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:54.354 21:47:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:54.354 21:47:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:54.354 21:47:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:54.354 21:47:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:54.354 21:47:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90315 00:20:54.355 21:47:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90315 ']' 00:20:54.355 21:47:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90315 00:20:54.355 21:47:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:20:54.355 21:47:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:54.355 21:47:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90315 00:20:54.355 21:47:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:54.355 21:47:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:54.355 killing process with pid 90315 00:20:54.355 21:47:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90315' 00:20:54.355 21:47:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90315 00:20:54.355 21:47:54 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90315 00:20:55.733 21:47:56 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:20:55.733 00:20:55.733 real 0m5.498s 00:20:55.733 user 0m7.450s 00:20:55.733 sys 0m1.228s 00:20:55.733 21:47:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:55.733 21:47:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:55.733 ************************************ 00:20:55.733 END TEST bdev_nbd 00:20:55.733 ************************************ 00:20:55.733 21:47:56 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:20:55.733 21:47:56 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:20:55.733 21:47:56 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:20:55.733 21:47:56 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:20:55.733 21:47:56 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:55.733 21:47:56 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:55.733 21:47:56 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:55.993 ************************************ 00:20:55.993 START TEST bdev_fio 00:20:55.993 ************************************ 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:20:55.993 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
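The `blockdev.sh@340`–`@342` loop above appends one `[job_<bdev>]` section per bdev to `bdev.fio` (the trace shows `echo '[job_raid5f]'` then `echo filename=raid5f`). The text it emits can be sketched as follows; `fio_job_sections` is a hypothetical stand-in, not SPDK's `fio_config_gen`:

```python
def fio_job_sections(bdev_names):
    """Build the per-bdev fio job sections appended to bdev.fio in the
    trace: a [job_<name>] header plus a filename=<name> line each."""
    lines = []
    for name in bdev_names:
        lines.append(f"[job_{name}]")
        lines.append(f"filename={name}")
    return "\n".join(lines) + "\n"
```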
00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:55.993 ************************************ 00:20:55.993 START TEST bdev_fio_rw_verify 00:20:55.993 ************************************ 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:55.993 21:47:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:56.253 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:56.253 fio-3.35 00:20:56.253 Starting 1 thread 00:21:08.509 00:21:08.509 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90518: Tue Dec 10 21:48:07 2024 00:21:08.509 read: IOPS=11.2k, BW=43.6MiB/s (45.7MB/s)(436MiB/10001msec) 00:21:08.509 slat (nsec): min=18630, max=87160, avg=21866.44, stdev=2762.84 00:21:08.509 clat (usec): min=10, max=351, avg=143.87, stdev=52.86 00:21:08.509 lat (usec): min=31, max=381, avg=165.73, stdev=53.35 00:21:08.509 clat percentiles (usec): 00:21:08.509 | 50.000th=[ 147], 99.000th=[ 255], 99.900th=[ 281], 99.990th=[ 322], 00:21:08.509 | 99.999th=[ 343] 00:21:08.509 write: IOPS=11.7k, BW=45.6MiB/s (47.8MB/s)(450MiB/9880msec); 0 zone resets 00:21:08.509 slat (usec): min=7, max=220, avg=18.12, stdev= 3.87 00:21:08.509 clat (usec): min=61, max=1017, avg=326.61, stdev=48.44 00:21:08.509 lat (usec): min=77, max=1234, avg=344.73, stdev=49.63 00:21:08.509 clat percentiles (usec): 00:21:08.509 | 50.000th=[ 330], 99.000th=[ 441], 99.900th=[ 586], 99.990th=[ 955], 00:21:08.509 | 99.999th=[ 1012] 00:21:08.509 bw ( KiB/s): min=44064, max=48944, per=99.11%, avg=46241.26, stdev=1288.68, samples=19 00:21:08.509 iops : min=11016, max=12236, avg=11560.32, stdev=322.17, samples=19 00:21:08.509 lat (usec) : 20=0.01%, 50=0.01%, 
100=12.31%, 250=39.14%, 500=48.48% 00:21:08.509 lat (usec) : 750=0.06%, 1000=0.02% 00:21:08.509 lat (msec) : 2=0.01% 00:21:08.509 cpu : usr=98.92%, sys=0.45%, ctx=70, majf=0, minf=9249 00:21:08.509 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:08.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:08.509 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:08.509 issued rwts: total=111628,115244,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:08.509 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:08.509 00:21:08.509 Run status group 0 (all jobs): 00:21:08.509 READ: bw=43.6MiB/s (45.7MB/s), 43.6MiB/s-43.6MiB/s (45.7MB/s-45.7MB/s), io=436MiB (457MB), run=10001-10001msec 00:21:08.509 WRITE: bw=45.6MiB/s (47.8MB/s), 45.6MiB/s-45.6MiB/s (47.8MB/s-47.8MB/s), io=450MiB (472MB), run=9880-9880msec 00:21:08.769 ----------------------------------------------------- 00:21:08.769 Suppressions used: 00:21:08.769 count bytes template 00:21:08.769 1 7 /usr/src/fio/parse.c 00:21:08.769 152 14592 /usr/src/fio/iolog.c 00:21:08.769 1 8 libtcmalloc_minimal.so 00:21:08.769 1 904 libcrypto.so 00:21:08.769 ----------------------------------------------------- 00:21:08.769 00:21:08.769 00:21:08.769 real 0m12.779s 00:21:08.769 user 0m12.941s 00:21:08.769 sys 0m0.637s 00:21:08.769 21:48:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:08.769 21:48:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:21:08.769 ************************************ 00:21:08.769 END TEST bdev_fio_rw_verify 00:21:08.769 ************************************ 00:21:08.769 21:48:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:21:08.769 21:48:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:08.769 21:48:09 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:21:08.769 21:48:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:08.769 21:48:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:21:08.769 21:48:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:21:08.769 21:48:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:21:08.769 21:48:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:21:08.769 21:48:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:21:08.769 21:48:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:21:08.769 21:48:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:21:08.769 21:48:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:08.769 21:48:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:21:08.769 21:48:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:21:08.769 21:48:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:21:08.769 21:48:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:21:08.769 21:48:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "bdc96311-f4d7-45e1-9503-45a439749296"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "bdc96311-f4d7-45e1-9503-45a439749296",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "bdc96311-f4d7-45e1-9503-45a439749296",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "b4f38ac9-c62b-4e07-ba05-ede89edc49d0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "2ec81ea5-e474-4594-9c47-e4425ac34f4d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "32f85b13-8530-4a30-a971-00d2d6e4bd17",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:21:08.769 21:48:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:21:09.028 21:48:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:21:09.028 21:48:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:09.028 /home/vagrant/spdk_repo/spdk 00:21:09.028 21:48:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:21:09.028 21:48:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:21:09.028 21:48:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:21:09.028 00:21:09.028 real 0m13.059s 00:21:09.028 user 0m13.067s 00:21:09.028 sys 0m0.767s 00:21:09.028 21:48:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:09.028 21:48:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:21:09.028 ************************************ 00:21:09.028 END TEST bdev_fio 00:21:09.028 ************************************ 00:21:09.028 21:48:09 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:09.028 21:48:09 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:09.028 21:48:09 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:21:09.028 21:48:09 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:09.028 21:48:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:09.028 ************************************ 00:21:09.028 START TEST bdev_verify 00:21:09.028 ************************************ 00:21:09.028 21:48:09 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:09.028 [2024-12-10 21:48:09.724590] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 
00:21:09.028 [2024-12-10 21:48:09.724704] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90682 ] 00:21:09.287 [2024-12-10 21:48:09.896920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:09.287 [2024-12-10 21:48:10.013103] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.287 [2024-12-10 21:48:10.013135] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:09.855 Running I/O for 5 seconds... 00:21:12.170 16034.00 IOPS, 62.63 MiB/s [2024-12-10T21:48:13.889Z] 15931.00 IOPS, 62.23 MiB/s [2024-12-10T21:48:14.826Z] 16079.67 IOPS, 62.81 MiB/s [2024-12-10T21:48:15.764Z] 15508.25 IOPS, 60.58 MiB/s [2024-12-10T21:48:15.764Z] 15659.40 IOPS, 61.17 MiB/s 00:21:14.981 Latency(us) 00:21:14.981 [2024-12-10T21:48:15.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.981 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:14.981 Verification LBA range: start 0x0 length 0x2000 00:21:14.981 raid5f : 5.01 7802.31 30.48 0.00 0.00 24647.74 1667.02 21292.05 00:21:14.981 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:14.981 Verification LBA range: start 0x2000 length 0x2000 00:21:14.981 raid5f : 5.02 7820.48 30.55 0.00 0.00 24590.89 199.43 21292.05 00:21:14.981 [2024-12-10T21:48:15.764Z] =================================================================================================================== 00:21:14.981 [2024-12-10T21:48:15.764Z] Total : 15622.79 61.03 0.00 0.00 24619.27 199.43 21292.05 00:21:16.360 00:21:16.360 real 0m7.297s 00:21:16.360 user 0m13.516s 00:21:16.360 sys 0m0.258s 00:21:16.360 21:48:16 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:16.360 21:48:16 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:21:16.360 ************************************ 00:21:16.360 END TEST bdev_verify 00:21:16.360 ************************************ 00:21:16.360 21:48:16 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:16.360 21:48:16 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:21:16.360 21:48:16 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:16.360 21:48:16 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:16.360 ************************************ 00:21:16.360 START TEST bdev_verify_big_io 00:21:16.360 ************************************ 00:21:16.360 21:48:17 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:16.360 [2024-12-10 21:48:17.083837] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:21:16.360 [2024-12-10 21:48:17.083952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90779 ] 00:21:16.619 [2024-12-10 21:48:17.256427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:16.619 [2024-12-10 21:48:17.367779] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.619 [2024-12-10 21:48:17.367815] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:17.186 Running I/O for 5 seconds... 
00:21:19.503 760.00 IOPS, 47.50 MiB/s [2024-12-10T21:48:21.223Z] 888.00 IOPS, 55.50 MiB/s [2024-12-10T21:48:22.159Z] 951.33 IOPS, 59.46 MiB/s [2024-12-10T21:48:23.091Z] 919.50 IOPS, 57.47 MiB/s [2024-12-10T21:48:23.348Z] 888.00 IOPS, 55.50 MiB/s 00:21:22.565 Latency(us) 00:21:22.565 [2024-12-10T21:48:23.348Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:22.565 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:22.565 Verification LBA range: start 0x0 length 0x200 00:21:22.565 raid5f : 5.27 445.33 27.83 0.00 0.00 7069863.35 230.74 382798.92 00:21:22.565 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:22.565 Verification LBA range: start 0x200 length 0x200 00:21:22.565 raid5f : 5.27 445.21 27.83 0.00 0.00 7087549.01 149.35 399283.09 00:21:22.565 [2024-12-10T21:48:23.348Z] =================================================================================================================== 00:21:22.565 [2024-12-10T21:48:23.348Z] Total : 890.54 55.66 0.00 0.00 7078706.18 149.35 399283.09 00:21:23.944 00:21:23.944 real 0m7.603s 00:21:23.944 user 0m14.161s 00:21:23.944 sys 0m0.238s 00:21:23.944 21:48:24 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:23.944 21:48:24 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:21:23.944 ************************************ 00:21:23.944 END TEST bdev_verify_big_io 00:21:23.944 ************************************ 00:21:23.944 21:48:24 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:23.944 21:48:24 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:23.944 21:48:24 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:23.944 21:48:24 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:23.944 ************************************ 00:21:23.944 START TEST bdev_write_zeroes 00:21:23.944 ************************************ 00:21:23.945 21:48:24 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:24.204 [2024-12-10 21:48:24.752632] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:21:24.204 [2024-12-10 21:48:24.752752] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90873 ] 00:21:24.204 [2024-12-10 21:48:24.915515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.466 [2024-12-10 21:48:25.035176] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.037 Running I/O for 1 seconds... 
00:21:25.975 25911.00 IOPS, 101.21 MiB/s 00:21:25.975 Latency(us) 00:21:25.975 [2024-12-10T21:48:26.758Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.975 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:25.975 raid5f : 1.01 25891.11 101.14 0.00 0.00 4928.50 1545.39 6553.60 00:21:25.975 [2024-12-10T21:48:26.758Z] =================================================================================================================== 00:21:25.975 [2024-12-10T21:48:26.758Z] Total : 25891.11 101.14 0.00 0.00 4928.50 1545.39 6553.60 00:21:27.355 00:21:27.355 real 0m3.305s 00:21:27.355 user 0m2.936s 00:21:27.355 sys 0m0.240s 00:21:27.356 21:48:27 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:27.356 21:48:27 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:21:27.356 ************************************ 00:21:27.356 END TEST bdev_write_zeroes 00:21:27.356 ************************************ 00:21:27.356 21:48:28 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:27.356 21:48:28 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:27.356 21:48:28 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:27.356 21:48:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:27.356 ************************************ 00:21:27.356 START TEST bdev_json_nonenclosed 00:21:27.356 ************************************ 00:21:27.356 21:48:28 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:27.356 [2024-12-10 
21:48:28.129214] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:21:27.356 [2024-12-10 21:48:28.129326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90932 ] 00:21:27.615 [2024-12-10 21:48:28.299770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.875 [2024-12-10 21:48:28.411650] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.875 [2024-12-10 21:48:28.411744] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:21:27.875 [2024-12-10 21:48:28.411770] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:27.875 [2024-12-10 21:48:28.411779] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:28.135 00:21:28.135 real 0m0.615s 00:21:28.135 user 0m0.379s 00:21:28.135 sys 0m0.131s 00:21:28.135 21:48:28 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:28.135 21:48:28 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:21:28.135 ************************************ 00:21:28.135 END TEST bdev_json_nonenclosed 00:21:28.135 ************************************ 00:21:28.135 21:48:28 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:28.135 21:48:28 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:28.135 21:48:28 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:28.135 21:48:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:28.135 
************************************ 00:21:28.135 START TEST bdev_json_nonarray 00:21:28.135 ************************************ 00:21:28.135 21:48:28 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:28.135 [2024-12-10 21:48:28.805535] Starting SPDK v25.01-pre git sha1 cec5ba284 / DPDK 24.03.0 initialization... 00:21:28.135 [2024-12-10 21:48:28.805655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90957 ] 00:21:28.394 [2024-12-10 21:48:28.977357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.394 [2024-12-10 21:48:29.090664] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.394 [2024-12-10 21:48:29.090771] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:21:28.394 [2024-12-10 21:48:29.090789] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:28.394 [2024-12-10 21:48:29.090807] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:28.654 00:21:28.654 real 0m0.614s 00:21:28.654 user 0m0.387s 00:21:28.654 sys 0m0.123s 00:21:28.654 21:48:29 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:28.654 21:48:29 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:21:28.654 ************************************ 00:21:28.654 END TEST bdev_json_nonarray 00:21:28.654 ************************************ 00:21:28.654 21:48:29 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:21:28.654 21:48:29 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:21:28.654 21:48:29 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:21:28.654 21:48:29 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:21:28.654 21:48:29 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:21:28.654 21:48:29 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:21:28.654 21:48:29 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:28.654 21:48:29 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:21:28.654 21:48:29 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:21:28.654 21:48:29 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:21:28.654 21:48:29 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:21:28.654 00:21:28.654 real 0m48.003s 00:21:28.654 user 1m5.078s 00:21:28.654 sys 0m4.645s 00:21:28.654 21:48:29 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:28.654 21:48:29 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:28.654 
************************************ 00:21:28.654 END TEST blockdev_raid5f 00:21:28.654 ************************************ 00:21:28.914 21:48:29 -- spdk/autotest.sh@194 -- # uname -s 00:21:28.914 21:48:29 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:21:28.914 21:48:29 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:21:28.914 21:48:29 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:21:28.914 21:48:29 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:21:28.914 21:48:29 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:21:28.914 21:48:29 -- spdk/autotest.sh@260 -- # timing_exit lib 00:21:28.914 21:48:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:28.914 21:48:29 -- common/autotest_common.sh@10 -- # set +x 00:21:28.914 21:48:29 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:21:28.914 21:48:29 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:21:28.914 21:48:29 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:21:28.914 21:48:29 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:28.914 21:48:29 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:21:28.914 21:48:29 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:21:28.914 21:48:29 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:21:28.914 21:48:29 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:21:28.914 21:48:29 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:21:28.914 21:48:29 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:21:28.914 21:48:29 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:21:28.914 21:48:29 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:21:28.914 21:48:29 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:21:28.914 21:48:29 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:21:28.914 21:48:29 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:21:28.914 21:48:29 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:21:28.914 21:48:29 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:21:28.914 21:48:29 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:21:28.914 21:48:29 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:21:28.914 21:48:29 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:21:28.914 21:48:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:28.914 21:48:29 -- common/autotest_common.sh@10 -- # set +x 00:21:28.914 21:48:29 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:21:28.914 21:48:29 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:21:28.914 21:48:29 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:21:28.914 21:48:29 -- common/autotest_common.sh@10 -- # set +x 00:21:30.825 INFO: APP EXITING 00:21:30.826 INFO: killing all VMs 00:21:30.826 INFO: killing vhost app 00:21:30.826 INFO: EXIT DONE 00:21:31.395 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:31.395 Waiting for block devices as requested 00:21:31.654 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:31.654 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:32.592 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:32.592 Cleaning 00:21:32.592 Removing: /var/run/dpdk/spdk0/config 00:21:32.592 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:32.592 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:32.592 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:32.592 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:32.592 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:32.592 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:32.592 Removing: /dev/shm/spdk_tgt_trace.pid56929 00:21:32.592 Removing: /var/run/dpdk/spdk0 00:21:32.592 Removing: /var/run/dpdk/spdk_pid56683 00:21:32.592 Removing: /var/run/dpdk/spdk_pid56929 00:21:32.592 Removing: /var/run/dpdk/spdk_pid57163 00:21:32.592 Removing: /var/run/dpdk/spdk_pid57273 00:21:32.593 Removing: /var/run/dpdk/spdk_pid57329 00:21:32.593 Removing: /var/run/dpdk/spdk_pid57457 00:21:32.593 Removing: /var/run/dpdk/spdk_pid57481 
00:21:32.593 Removing: /var/run/dpdk/spdk_pid57691 00:21:32.593 Removing: /var/run/dpdk/spdk_pid57802 00:21:32.593 Removing: /var/run/dpdk/spdk_pid57915 00:21:32.593 Removing: /var/run/dpdk/spdk_pid58037 00:21:32.593 Removing: /var/run/dpdk/spdk_pid58150 00:21:32.593 Removing: /var/run/dpdk/spdk_pid58190 00:21:32.593 Removing: /var/run/dpdk/spdk_pid58232 00:21:32.593 Removing: /var/run/dpdk/spdk_pid58302 00:21:32.593 Removing: /var/run/dpdk/spdk_pid58403 00:21:32.593 Removing: /var/run/dpdk/spdk_pid58872 00:21:32.593 Removing: /var/run/dpdk/spdk_pid58953 00:21:32.593 Removing: /var/run/dpdk/spdk_pid59029 00:21:32.593 Removing: /var/run/dpdk/spdk_pid59045 00:21:32.593 Removing: /var/run/dpdk/spdk_pid59196 00:21:32.593 Removing: /var/run/dpdk/spdk_pid59218 00:21:32.593 Removing: /var/run/dpdk/spdk_pid59377 00:21:32.593 Removing: /var/run/dpdk/spdk_pid59398 00:21:32.593 Removing: /var/run/dpdk/spdk_pid59468 00:21:32.593 Removing: /var/run/dpdk/spdk_pid59497 00:21:32.593 Removing: /var/run/dpdk/spdk_pid59561 00:21:32.593 Removing: /var/run/dpdk/spdk_pid59579 00:21:32.593 Removing: /var/run/dpdk/spdk_pid59795 00:21:32.593 Removing: /var/run/dpdk/spdk_pid59829 00:21:32.593 Removing: /var/run/dpdk/spdk_pid59918 00:21:32.593 Removing: /var/run/dpdk/spdk_pid61291 00:21:32.593 Removing: /var/run/dpdk/spdk_pid61497 00:21:32.593 Removing: /var/run/dpdk/spdk_pid61643 00:21:32.593 Removing: /var/run/dpdk/spdk_pid62297 00:21:32.593 Removing: /var/run/dpdk/spdk_pid62504 00:21:32.853 Removing: /var/run/dpdk/spdk_pid62650 00:21:32.853 Removing: /var/run/dpdk/spdk_pid63293 00:21:32.853 Removing: /var/run/dpdk/spdk_pid63630 00:21:32.853 Removing: /var/run/dpdk/spdk_pid63770 00:21:32.853 Removing: /var/run/dpdk/spdk_pid65176 00:21:32.853 Removing: /var/run/dpdk/spdk_pid65430 00:21:32.853 Removing: /var/run/dpdk/spdk_pid65576 00:21:32.853 Removing: /var/run/dpdk/spdk_pid66972 00:21:32.853 Removing: /var/run/dpdk/spdk_pid67225 00:21:32.853 Removing: /var/run/dpdk/spdk_pid67371 
00:21:32.853 Removing: /var/run/dpdk/spdk_pid68756 00:21:32.853 Removing: /var/run/dpdk/spdk_pid69207 00:21:32.853 Removing: /var/run/dpdk/spdk_pid69353 00:21:32.853 Removing: /var/run/dpdk/spdk_pid70855 00:21:32.853 Removing: /var/run/dpdk/spdk_pid71116 00:21:32.853 Removing: /var/run/dpdk/spdk_pid71267 00:21:32.853 Removing: /var/run/dpdk/spdk_pid72768 00:21:32.853 Removing: /var/run/dpdk/spdk_pid73038 00:21:32.853 Removing: /var/run/dpdk/spdk_pid73189 00:21:32.853 Removing: /var/run/dpdk/spdk_pid74680 00:21:32.853 Removing: /var/run/dpdk/spdk_pid75173 00:21:32.853 Removing: /var/run/dpdk/spdk_pid75316 00:21:32.853 Removing: /var/run/dpdk/spdk_pid75462 00:21:32.853 Removing: /var/run/dpdk/spdk_pid75888 00:21:32.853 Removing: /var/run/dpdk/spdk_pid76628 00:21:32.853 Removing: /var/run/dpdk/spdk_pid77025 00:21:32.853 Removing: /var/run/dpdk/spdk_pid77714 00:21:32.853 Removing: /var/run/dpdk/spdk_pid78160 00:21:32.853 Removing: /var/run/dpdk/spdk_pid78919 00:21:32.853 Removing: /var/run/dpdk/spdk_pid79334 00:21:32.853 Removing: /var/run/dpdk/spdk_pid81309 00:21:32.853 Removing: /var/run/dpdk/spdk_pid81753 00:21:32.853 Removing: /var/run/dpdk/spdk_pid82208 00:21:32.853 Removing: /var/run/dpdk/spdk_pid84298 00:21:32.853 Removing: /var/run/dpdk/spdk_pid84782 00:21:32.853 Removing: /var/run/dpdk/spdk_pid85294 00:21:32.853 Removing: /var/run/dpdk/spdk_pid86351 00:21:32.853 Removing: /var/run/dpdk/spdk_pid86668 00:21:32.853 Removing: /var/run/dpdk/spdk_pid87605 00:21:32.853 Removing: /var/run/dpdk/spdk_pid87929 00:21:32.853 Removing: /var/run/dpdk/spdk_pid88862 00:21:32.853 Removing: /var/run/dpdk/spdk_pid89195 00:21:32.853 Removing: /var/run/dpdk/spdk_pid89872 00:21:32.853 Removing: /var/run/dpdk/spdk_pid90146 00:21:32.853 Removing: /var/run/dpdk/spdk_pid90219 00:21:32.853 Removing: /var/run/dpdk/spdk_pid90261 00:21:32.853 Removing: /var/run/dpdk/spdk_pid90503 00:21:32.853 Removing: /var/run/dpdk/spdk_pid90682 00:21:32.853 Removing: /var/run/dpdk/spdk_pid90779 
00:21:32.853 Removing: /var/run/dpdk/spdk_pid90873 00:21:32.853 Removing: /var/run/dpdk/spdk_pid90932 00:21:32.853 Removing: /var/run/dpdk/spdk_pid90957 00:21:32.853 Clean 00:21:32.853 21:48:33 -- common/autotest_common.sh@1453 -- # return 0 00:21:32.853 21:48:33 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:21:32.853 21:48:33 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:32.853 21:48:33 -- common/autotest_common.sh@10 -- # set +x 00:21:33.112 21:48:33 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:21:33.112 21:48:33 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:33.112 21:48:33 -- common/autotest_common.sh@10 -- # set +x 00:21:33.112 21:48:33 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:33.112 21:48:33 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:21:33.112 21:48:33 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:21:33.112 21:48:33 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:21:33.112 21:48:33 -- spdk/autotest.sh@398 -- # hostname 00:21:33.112 21:48:33 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:21:33.372 geninfo: WARNING: invalid characters removed from testname! 
00:21:55.318 21:48:54 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:56.699 21:48:57 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:58.607 21:48:59 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:00.518 21:49:01 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:02.427 21:49:03 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:04.965 21:49:05 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:06.867 21:49:07 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:06.867 21:49:07 -- spdk/autorun.sh@1 -- $ timing_finish 00:22:06.867 21:49:07 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:22:06.867 21:49:07 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:06.867 21:49:07 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:22:06.867 21:49:07 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:06.867 + [[ -n 5421 ]] 00:22:06.867 + sudo kill 5421 00:22:06.877 [Pipeline] } 00:22:06.893 [Pipeline] // timeout 00:22:06.899 [Pipeline] } 00:22:06.915 [Pipeline] // stage 00:22:06.921 [Pipeline] } 00:22:06.939 [Pipeline] // catchError 00:22:06.949 [Pipeline] stage 00:22:06.951 [Pipeline] { (Stop VM) 00:22:06.963 [Pipeline] sh 00:22:07.243 + vagrant halt 00:22:09.775 ==> default: Halting domain... 00:22:17.915 [Pipeline] sh 00:22:18.200 + vagrant destroy -f 00:22:20.740 ==> default: Removing domain... 
00:22:20.752 [Pipeline] sh 00:22:21.034 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:22:21.043 [Pipeline] } 00:22:21.058 [Pipeline] // stage 00:22:21.064 [Pipeline] } 00:22:21.079 [Pipeline] // dir 00:22:21.084 [Pipeline] } 00:22:21.100 [Pipeline] // wrap 00:22:21.106 [Pipeline] } 00:22:21.119 [Pipeline] // catchError 00:22:21.129 [Pipeline] stage 00:22:21.131 [Pipeline] { (Epilogue) 00:22:21.144 [Pipeline] sh 00:22:21.429 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:22:26.737 [Pipeline] catchError 00:22:26.739 [Pipeline] { 00:22:26.752 [Pipeline] sh 00:22:27.036 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:22:27.036 Artifacts sizes are good 00:22:27.045 [Pipeline] } 00:22:27.058 [Pipeline] // catchError 00:22:27.068 [Pipeline] archiveArtifacts 00:22:27.074 Archiving artifacts 00:22:27.174 [Pipeline] cleanWs 00:22:27.184 [WS-CLEANUP] Deleting project workspace... 00:22:27.184 [WS-CLEANUP] Deferred wipeout is used... 00:22:27.190 [WS-CLEANUP] done 00:22:27.191 [Pipeline] } 00:22:27.200 [Pipeline] // stage 00:22:27.203 [Pipeline] } 00:22:27.210 [Pipeline] // node 00:22:27.217 [Pipeline] End of Pipeline 00:22:27.246 Finished: SUCCESS